From 8442a92e5a193dd63593ee336158d8ec92e81fff Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 1 Nov 2003 01:56:29 +0000 Subject: [PATCH] Spell checking, consistent terminology. --- doc/src/sgml/arch-dev.sgml | 42 +- doc/src/sgml/array.sgml | 12 +- doc/src/sgml/catalogs.sgml | 30 +- doc/src/sgml/datatype.sgml | 1020 +++++++++++++------------- doc/src/sgml/ddl.sgml | 20 +- doc/src/sgml/func.sgml | 16 +- doc/src/sgml/indexcost.sgml | 26 +- doc/src/sgml/installation.sgml | 7 +- doc/src/sgml/jdbc.sgml | 34 +- doc/src/sgml/libpq.sgml | 70 +- doc/src/sgml/lobj.sgml | 6 +- doc/src/sgml/maintenance.sgml | 53 +- doc/src/sgml/monitoring.sgml | 24 +- doc/src/sgml/nls.sgml | 14 +- doc/src/sgml/page.sgml | 16 +- doc/src/sgml/perform.sgml | 4 +- doc/src/sgml/plpgsql.sgml | 14 +- doc/src/sgml/protocol.sgml | 38 +- doc/src/sgml/queries.sgml | 12 +- doc/src/sgml/query.sgml | 4 +- doc/src/sgml/ref/alter_sequence.sgml | 113 +-- doc/src/sgml/ref/create_table.sgml | 6 +- doc/src/sgml/ref/create_type.sgml | 6 +- doc/src/sgml/ref/create_view.sgml | 6 +- doc/src/sgml/ref/pg_ctl-ref.sgml | 6 +- doc/src/sgml/ref/psql-ref.sgml | 4 +- doc/src/sgml/ref/select.sgml | 4 +- doc/src/sgml/release.sgml | 54 +- doc/src/sgml/rules.sgml | 4 +- doc/src/sgml/runtime.sgml | 12 +- doc/src/sgml/syntax.sgml | 14 +- doc/src/sgml/typeconv.sgml | 6 +- doc/src/sgml/xfunc.sgml | 4 +- doc/src/sgml/xindex.sgml | 16 +- doc/src/sgml/xoper.sgml | 24 +- doc/src/sgml/xtypes.sgml | 6 +- 36 files changed, 877 insertions(+), 870 deletions(-) diff --git a/doc/src/sgml/arch-dev.sgml b/doc/src/sgml/arch-dev.sgml index 73ad8a057e..090a64c9a8 100644 --- a/doc/src/sgml/arch-dev.sgml +++ b/doc/src/sgml/arch-dev.sgml @@ -1,5 +1,5 @@ @@ -99,11 +99,11 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.22 2003/09/29 18:18:35 mo The executor recursively steps through the plan tree and - retrieves tuples in the way represented by the plan. + retrieves rows in the way represented by the plan. 
The executor makes use of the storage system while scanning relations, performs sorts and joins, - evaluates qualifications and finally hands back the tuples derived. + evaluates qualifications and finally hands back the rows derived. @@ -150,7 +150,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.22 2003/09/29 18:18:35 mo to the backend (server). The query is transmitted using plain text, i.e. there is no parsing done in the frontend (client). The server parses the query, creates an execution plan, - executes the plan and returns the retrieved tuples to the client + executes the plan and returns the retrieved rows to the client by transmitting them over the established connection. @@ -195,8 +195,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.22 2003/09/29 18:18:35 mo The lexer is defined in the file scan.l and is responsible for recognizing identifiers, - the SQL keywords etc. For - every keyword or identifier that is found, a token + the SQL key words etc. For + every key word or identifier that is found, a token is generated and handed to the parser. @@ -278,7 +278,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.22 2003/09/29 18:18:35 mo call. This may be transformed to either a FuncExpr or Aggref node depending on whether the referenced name turns out to be an ordinary function or an aggregate function. - Also, information about the actual datatypes of columns and expression + Also, information about the actual data types of columns and expression results is added to the query tree. @@ -297,9 +297,9 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.22 2003/09/29 18:18:35 mo - The first one worked using tuple level processing and was + The first one worked using row level processing and was implemented deep in the executor. The rule system was - called whenever an individual tuple had been accessed. This + called whenever an individual row had been accessed. 
This implementation was removed in 1995 when the last official release of the Berkeley Postgres project was transformed into Postgres95. @@ -396,11 +396,11 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.22 2003/09/29 18:18:35 mo nested loop join: The right relation is scanned - once for every tuple found in the left relation. This strategy + once for every row found in the left relation. This strategy is easy to implement but can be very time consuming. (However, - if the right relation can be scanned with an indexscan, this can + if the right relation can be scanned with an index scan, this can be a good strategy. It is possible to use values from the current - row of the left relation as keys for the indexscan of the right.) + row of the left relation as keys for the index scan of the right.) @@ -419,8 +419,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.22 2003/09/29 18:18:35 mo hash join: the right relation is first scanned and loaded into a hash table, using its join attributes as hash keys. Next the left relation is scanned and the - appropriate values of every tuple found are used as hash keys to - locate the matching tuples in the table. + appropriate values of every row found are used as hash keys to + locate the matching rows in the table. @@ -428,7 +428,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.22 2003/09/29 18:18:35 mo The finished plan tree consists of sequential or index scans of - the base relations, plus nestloop, merge, or hash join nodes as + the base relations, plus nested-loop, merge, or hash join nodes as needed, plus any auxiliary steps needed, such as sort nodes or aggregate-function calculation nodes. 
Most of these plan node types have the additional ability to do selection @@ -451,26 +451,26 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.22 2003/09/29 18:18:35 mo The executor takes the plan handed back by the planner/optimizer and recursively processes it to extract the required set of rows. This is essentially a demand-pull pipeline mechanism. - Each time a plan node is called, it must deliver one more tuple, or - report that it is done delivering tuples. + Each time a plan node is called, it must deliver one more row, or + report that it is done delivering rows. To provide a concrete example, assume that the top node is a MergeJoin node. - Before any merge can be done two tuples have to be fetched (one from + Before any merge can be done two rows have to be fetched (one from each subplan). So the executor recursively calls itself to process the subplans (it starts with the subplan attached to lefttree). The new top node (the top node of the left subplan) is, let's say, a Sort node and again recursion is needed to obtain - an input tuple. The child node of the Sort might + an input row. The child node of the Sort might be a SeqScan node, representing actual reading of a table. Execution of this node causes the executor to fetch a row from the table and return it up to the calling node. The Sort node will repeatedly call its child to obtain all the rows to be sorted. When the input is exhausted (as indicated by the child node returning - a NULL instead of a tuple), the Sort code performs + a NULL instead of a row), the Sort code performs the sort, and finally is able to return its first output row, namely the first one in sorted order. It keeps the remaining rows stored so that it can deliver them in sorted order in response to later demands. @@ -508,7 +508,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.22 2003/09/29 18:18:35 mo result row. But INSERT ... SELECT may demand the full power of the executor mechanism.) 
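[Editor's example] The plan trees and join nodes described above can be made visible with EXPLAIN. This is only a sketch: the table and column names are hypothetical, and the plan actually chosen depends on statistics and configuration.

```sql
-- Hypothetical tables; the plan shown is only one possibility.
CREATE TABLE t1 (id integer, val text);
CREATE TABLE t2 (id integer, ref integer);

EXPLAIN SELECT * FROM t1 JOIN t2 ON t1.id = t2.ref;
-- Depending on statistics, the plan tree might contain, e.g.:
--   Hash Join
--     ->  Seq Scan on t2
--     ->  Hash
--           ->  Seq Scan on t1
```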
For UPDATE, the planner arranges that each computed row includes all the updated column values, plus - the TID (tuple ID, or location) of the original target row; + the TID (tuple ID, or row ID) of the original target row; the executor top level uses this information to create a new updated row and mark the old row deleted. For DELETE, the only column that is actually returned by the plan is the TID, and the executor top diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 68c6dc40db..fc87fdf43f 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -1,4 +1,4 @@ - + Arrays @@ -348,9 +348,9 @@ SELECT ARRAY[5,6] || ARRAY[[1,2],[3,4]]; When a single element is pushed on to the beginning of a one-dimensional array, the result is an array with a lower bound subscript equal to - the righthand operand's lower bound subscript, minus one. When a single + the right-hand operand's lower bound subscript, minus one. When a single element is pushed on to the end of a one-dimensional array, the result is - an array retaining the lower bound of the lefthand operand. For example: + an array retaining the lower bound of the left-hand operand. For example: SELECT array_dims(1 || ARRAY[2,3]); array_dims @@ -368,9 +368,9 @@ SELECT array_dims(ARRAY[1,2] || 3); When two arrays with an equal number of dimensions are concatenated, the - result retains the lower bound subscript of the lefthand operand's outer - dimension. The result is an array comprising every element of the lefthand - operand followed by every element of the righthand operand. For example: + result retains the lower bound subscript of the left-hand operand's outer + dimension. The result is an array comprising every element of the left-hand + operand followed by every element of the right-hand operand. 
For example: SELECT array_dims(ARRAY[1,2] || ARRAY[3,4,5]); array_dims diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index a30f0f867b..a93b0587cf 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -1,6 +1,6 @@ @@ -755,9 +755,9 @@ int4 - Always -1 in storage, but when loaded into a tuple descriptor + Always -1 in storage, but when loaded into a row descriptor in memory this may be updated to cache the offset of the attribute - within the tuple. + within the row. @@ -800,9 +800,9 @@ If true, this attribute is a set. In that case, what is really - stored in the attribute is the OID of a tuple in the + stored in the attribute is the OID of a row in the pg_proc catalog. The - pg_proc tuple contains the query + pg_proc row contains the query string that defines this set, i.e., the query to run to get the set. So the atttypid (see above) refers to the type returned by this query, but the @@ -1046,7 +1046,7 @@ float4 - Number of tuples in the table. + Number of rows in the table. This is only an estimate used by the planner. It is updated by VACUUM, ANALYZE, and CREATE INDEX. @@ -1553,7 +1553,7 @@ xid - All tuples inserted or deleted by transaction IDs before this one + All rows inserted or deleted by transaction IDs before this one have been marked as known committed or known aborted in this database. This is used to determine when commit-log space can be recycled. @@ -1564,7 +1564,7 @@ xid - All tuples inserted by transaction IDs before this one have been + All rows inserted by transaction IDs before this one have been relabeled with a permanent (frozen) transaction ID in this database. This is useful to check whether a database must be vacuumed soon to avoid transaction ID wrap-around problems. 
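[Editor's example] The wrap-around check that pg_database.datfrozenxid supports, as described above, can be done from SQL. A sketch, assuming the age function applied to a transaction ID (it reports how many transactions have elapsed since the database was last frozen):

```sql
-- Databases with a large age(datfrozenxid) are due for a
-- database-wide VACUUM to avoid transaction ID wrap-around.
SELECT datname, age(datfrozenxid) FROM pg_database;
```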
@@ -1666,7 +1666,7 @@ refobjid oid - any oid attribute + any OID column The OID of the specific referenced object @@ -1945,7 +1945,7 @@ indkey int2vector - pg_attribute.attnum + pg_attribute.attnum This is an array of indnatts (up to INDEX_MAX_KEYS) values that indicate which @@ -2407,7 +2407,7 @@ opcamid oid pg_am.oid - Index access method opclass is for + Index access method operator class is for @@ -3233,7 +3233,7 @@ tgtype int2 - Bitmask identifying trigger conditions + Bit mask identifying trigger conditions @@ -3534,7 +3534,7 @@ For types used in system tables, it is critical that the size and alignment defined in pg_type agree with the way that the compiler will lay out the column in - a struct representing a table row. + a structure representing a table row. @@ -3611,8 +3611,8 @@ typndims is the number of array dimensions - for a domain that is an array (that is, typbasetype is an array type; - the domain's typelem will match the base type's typelem). + for a domain that is an array (that is, typbasetype is an array type; + the domain's typelem will match the base type's typelem). Zero for types other than array domains. 
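[Editor's example] A minimal sketch of the typndims behavior just described; the domain name intlist is hypothetical.

```sql
-- A domain over an array type; its pg_type row has typndims = 1
-- and typbasetype pointing at the integer[] array type.
CREATE DOMAIN intlist AS integer[];
SELECT typndims, typbasetype FROM pg_type WHERE typname = 'intlist';
```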
diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index 46748935d2..2c9a45e8ad 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -1,5 +1,5 @@ @@ -165,9 +165,9 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.127 2003/10/22 18:10:53 m numeric [ (p, - s) ] + s) ] decimal [ (p, - s) ] + s) ] exact numeric with selectable precision @@ -295,71 +295,71 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.127 2003/10/22 18:10:53 m - Name - Storage Size - Description - Range + Name + Storage Size + Description + Range - smallint - 2 bytes - small-range integer - -32768 to +32767 + smallint + 2 bytes + small-range integer + -32768 to +32767 - integer - 4 bytes - usual choice for integer - -2147483648 to +2147483647 + integer + 4 bytes + usual choice for integer + -2147483648 to +2147483647 - bigint - 8 bytes - large-range integer - -9223372036854775808 to 9223372036854775807 + bigint + 8 bytes + large-range integer + -9223372036854775808 to 9223372036854775807 - decimal - variable - user-specified precision, exact - no limit + decimal + variable + user-specified precision, exact + no limit - numeric - variable - user-specified precision, exact - no limit + numeric + variable + user-specified precision, exact + no limit - real - 4 bytes - variable-precision, inexact - 6 decimal digits precision + real + 4 bytes + variable-precision, inexact + 6 decimal digits precision - double precision - 8 bytes - variable-precision, inexact - 15 decimal digits precision + double precision + 8 bytes + variable-precision, inexact + 15 decimal digits precision - serial - 4 bytes - autoincrementing integer - 1 to 2147483647 + serial + 4 bytes + autoincrementing integer + 1 to 2147483647 - bigserial - 8 bytes - large autoincrementing integer - 1 to 9223372036854775807 + bigserial + 8 bytes + large autoincrementing integer + 1 to 9223372036854775807 @@ -727,7 +727,7 @@ CREATE TABLE tablename ( column should be assigned its default 
value. This can be done either by excluding the column from the list of columns in the INSERT statement, or through the use of - the DEFAULT keyword. + the DEFAULT key word. @@ -786,18 +786,18 @@ CREATE TABLE tablename ( - Name - Storage Size - Description - Range + Name + Storage Size + Description + Range - money - 4 bytes - currency amount - -21474836.48 to +21474836.47 + money + 4 bytes + currency amount + -21474836.48 to +21474836.47 @@ -843,22 +843,22 @@ CREATE TABLE tablename ( - Name - Description + Name + Description - character varying(n), varchar(n) - variable-length with limit + character varying(n), varchar(n) + variable-length with limit - character(n), char(n) - fixed-length, blank padded + character(n), char(n) + fixed-length, blank padded - text - variable unlimited length + text + variable unlimited length @@ -1013,21 +1013,21 @@ SELECT b, char_length(b) FROM test2; - Name - Storage Size - Description + Name + Storage Size + Description - "char" - 1 byte - single-character internal type + "char" + 1 byte + single-character internal type - name - 64 bytes - internal type for object names + name + 64 bytes + internal type for object names @@ -1224,11 +1224,11 @@ SELECT b, char_length(b) FROM test2; - The SQL standard defines a different binary - string type, called BLOB or BINARY LARGE - OBJECT. The input format is different compared to - bytea, but the provided functions and operators are - mostly the same. + The SQL standard defines a different binary + string type, called BLOB or BINARY LARGE + OBJECT. The input format is different compared to + bytea, but the provided functions and operators are + mostly the same. @@ -1350,10 +1350,10 @@ SELECT b, char_length(b) FROM test2; When timestamp values are stored as double precision floating-point numbers (currently the default), the effective limit of precision - may be less than 6. Timestamp values are stored as seconds + may be less than 6. 
timestamp values are stored as seconds since 2000-01-01, and microsecond precision is achieved for dates within a few years of 2000-01-01, but the precision degrades for dates further - away. When timestamps are stored as eight-byte integers (a compile-time + away. When timestamp values are stored as eight-byte integers (a compile-time option), microsecond precision is available over the full range of values. However eight-byte integer timestamps have a reduced range of dates from 4713 BC up to 294276 AD. @@ -1452,58 +1452,58 @@ SELECT b, char_length(b) FROM test2; Date Input - - Example - Description - + + Example + Description + - - January 8, 1999 - unambiguous in any datestyle input mode - - - 1999-01-08 - ISO-8601, January 8 in any mode - (recommended format) - - - 1/8/1999 - January 8 in MDY mode; - August 1 in DMY mode - - - 1/18/1999 - January 18 in MDY mode; - rejected in other modes - - - 01/02/03 - January 2, 2003 in MDY mode; - February 1, 2003 in DMY mode; - February 3, 2001 in YMD mode - - - - 19990108 - ISO-8601; January 8, 1999 in any mode - - - 990108 - ISO-8601; January 8, 1999 in any mode - - - 1999.008 - year and day of year - - - J2451187 - Julian day - - - January 8, 99 BC - year 99 before the Common Era - + + January 8, 1999 + unambiguous in any datestyle input mode + + + 1999-01-08 + ISO-8601, January 8 in any mode + (recommended format) + + + 1/8/1999 + January 8 in MDY mode; + August 1 in DMY mode + + + 1/18/1999 + January 18 in MDY mode; + rejected in other modes + + + 01/02/03 + January 2, 2003 in MDY mode; + February 1, 2003 in DMY mode; + February 3, 2001 in YMD mode + + + + 19990108 + ISO-8601; January 8, 1999 in any mode + + + 990108 + ISO-8601; January 8, 1999 in any mode + + + 1999.008 + year and day of year + + + J2451187 + Julian day + + + January 8, 99 BC + year 99 before the Common Era + @@ -1542,96 +1542,96 @@ SELECT b, char_length(b) FROM test2; Time Input - - - Example - Description - - - - - 04:05:06.789 - ISO 8601 - - - 
04:05:06 - ISO 8601 - - - 04:05 - ISO 8601 - - - 040506 - ISO 8601 - - - 04:05 AM - same as 04:05; AM does not affect value - - - 04:05 PM - same as 16:05; input hour must be <= 12 - - - 04:05:06.789-8 - ISO 8601 - - - 04:05:06-08:00 - ISO 8601 - - - 04:05-08:00 - ISO 8601 - - - 040506-08 - ISO 8601 - - - 04:05:06 PST - timezone specified by name - - - -
+ + + Example + Description + + + + + 04:05:06.789 + ISO 8601 + + + 04:05:06 + ISO 8601 + + + 04:05 + ISO 8601 + + + 040506 + ISO 8601 + + + 04:05 AM + same as 04:05; AM does not affect value + + + 04:05 PM + same as 16:05; input hour must be <= 12 + + + 04:05:06.789-8 + ISO 8601 + + + 04:05:06-08:00 + ISO 8601 + + + 04:05-08:00 + ISO 8601 + + + 040506-08 + ISO 8601 + + + 04:05:06 PST + time zone specified by name + + + + Time Zone Input - - - Example - Description - - - - - PST - Pacific Standard Time - - - -8:00 - ISO-8601 offset for PST - - - -800 - ISO-8601 offset for PST - - - -8 - ISO-8601 offset for PST - - - zulu - Military abbreviation for GMT - - - z - Short form of zulu - - + + + Example + Description + + + + + PST + Pacific Standard Time + + + -8:00 + ISO-8601 offset for PST + + + -800 + ISO-8601 offset for PST + + + -8 + ISO-8601 offset for PST + + + zulu + Military abbreviation for GMT + + + z + Short form of zulu + +
@@ -1788,55 +1788,55 @@ January 8 04:05:06 1999 PST Special Date/Time Inputs - - - Input String + + + Input String Valid Types - Description - - - - - epoch + Description + + + + + epoch date, timestamp - 1970-01-01 00:00:00+00 (Unix system time zero) - - - infinity + 1970-01-01 00:00:00+00 (Unix system time zero) + + + infinity timestamp - later than all other time stamps - - - -infinity + later than all other time stamps + + + -infinity timestamp - earlier than all other time stamps - - - now + earlier than all other time stamps + + + now date, time, timestamp - current transaction's start time - - - today + current transaction's start time + + + today date, timestamp - midnight today - - - tomorrow + midnight today + + + tomorrow date, timestamp - midnight tomorrow - - - yesterday + midnight tomorrow + + + yesterday date, timestamp - midnight yesterday - - - allballs + midnight yesterday + + + allballs time - 00:00:00.00 UTC - - + 00:00:00.00 UTC + +
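[Editor's example] The special input strings in the table above can be exercised directly; note that 'now', 'today', and similar strings are resolved when the literal is converted, so the results for those depend on the current transaction's start time.

```sql
-- 'epoch' is a fixed point in time; 'now' and 'tomorrow' are
-- resolved relative to the current transaction's start time.
SELECT DATE 'epoch', TIMESTAMP 'now', DATE 'tomorrow';
```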
@@ -1877,33 +1877,33 @@ January 8 04:05:06 1999 PST Date/Time Output Styles - - Style Specification - Description - Example - + + Style Specification + Description + Example + - - ISO - ISO 8601/SQL standard - 1997-12-17 07:37:16-08 - - - SQL - traditional style - 12/17/1997 07:37:16.00 PST - - - POSTGRES - original style - Wed Dec 17 07:37:16 1997 PST - - - German - regional style - 17.12.1997 07:37:16.00 PST - + + ISO + ISO 8601/SQL standard + 1997-12-17 07:37:16-08 + + + SQL + traditional style + 12/17/1997 07:37:16.00 PST + + + POSTGRES + original style + Wed Dec 17 07:37:16 1997 PST + + + German + regional style + 17.12.1997 07:37:16.00 PST + @@ -1922,28 +1922,28 @@ January 8 04:05:06 1999 PST Date Order Conventions - - DateStyle setting - Input Ordering - Example Output - + + datestyle Setting + Input Ordering + Example Output + - - SQL, DMY - day/month/year - 17/12/1997 15:37:16.00 CET - - - SQL, MDY - month/day/year - 12/17/1997 07:37:16.00 PST - - - Postgres, DMY - day/month/year - Wed 17 Dec 07:37:16 1997 PST - + + SQL, DMY + day/month/year + 17/12/1997 15:37:16.00 CET + + + SQL, MDY + month/day/year + 12/17/1997 07:37:16.00 PST + + + Postgres, DMY + day/month/year + Wed 17 Dec 07:37:16 1997 PST + @@ -2007,22 +2007,22 @@ January 8 04:05:06 1999 PST - Although the date type - does not have an associated time zone, the - time type can. - Time zones in the real world can have no meaning unless - associated with a date as well as a time - since the offset may vary through the year with daylight-saving - time boundaries. + Although the date type + does not have an associated time zone, the + time type can. + Time zones in the real world can have no meaning unless + associated with a date as well as a time + since the offset may vary through the year with daylight-saving + time boundaries. - The default time zone is specified as a constant numeric offset - from UTC. 
It is not possible to adapt to daylight-saving - time when doing date/time arithmetic across - DST boundaries. + The default time zone is specified as a constant numeric offset + from UTC. It is not possible to adapt to daylight-saving + time when doing date/time arithmetic across + DST boundaries. @@ -2053,32 +2053,32 @@ January 8 04:05:06 1999 PST - The TZ environment variable on the server host - is used by the server as the default time zone, if no other is - specified. + The TZ environment variable on the server host + is used by the server as the default time zone, if no other is + specified. - The timezone configuration parameter can be - set in the file postgresql.conf. + The timezone configuration parameter can be + set in the file postgresql.conf. - The PGTZ environment variable, if set at the - client, is used by libpq - applications to send a SET TIME ZONE - command to the server upon connection. + The PGTZ environment variable, if set at the + client, is used by libpq + applications to send a SET TIME ZONE + command to the server upon connection. - The SQL command SET TIME ZONE - sets the time zone for the session. + The SQL command SET TIME ZONE + sets the time zone for the session. @@ -2227,60 +2227,60 @@ SELECT * FROM test1 WHERE a; - Name - Storage Size - Representation - Description + Name + Storage Size + Representation + Description - point - 16 bytes - Point on the plane - (x,y) + point + 16 bytes + Point on the plane + (x,y) - line - 32 bytes - Infinite line (not fully implemented) - ((x1,y1),(x2,y2)) + line + 32 bytes + Infinite line (not fully implemented) + ((x1,y1),(x2,y2)) - lseg - 32 bytes - Finite line segment - ((x1,y1),(x2,y2)) + lseg + 32 bytes + Finite line segment + ((x1,y1),(x2,y2)) - box - 32 bytes - Rectangular box - ((x1,y1),(x2,y2)) + box + 32 bytes + Rectangular box + ((x1,y1),(x2,y2)) - path - 16+16n bytes - Closed path (similar to polygon) - ((x1,y1),...) 
+ path + 16+16n bytes + Closed path (similar to polygon) + ((x1,y1),...) - path - 16+16n bytes - Open path - [(x1,y1),...] + path + 16+16n bytes + Open path + [(x1,y1),...] - polygon - 40+16n bytes - Polygon (similar to closed path) - ((x1,y1),...) + polygon + 40+16n bytes + Polygon (similar to closed path) + ((x1,y1),...) - circle - 24 bytes - Circle - <(x,y),r> (center and radius) + circle + 24 bytes + Circle + <(x,y),r> (center and radius) @@ -2507,29 +2507,29 @@ SELECT * FROM test1 WHERE a; - Name - Storage Size - Description + Name + Storage Size + Description - cidr - 12 or 24 bytes - IPv4 or IPv6 networks + cidr + 12 or 24 bytes + IPv4 or IPv6 networks - inet - 12 or 24 bytes - IPv4 and IPv6 hosts and networks + inet + 12 or 24 bytes + IPv4 and IPv6 hosts and networks - macaddr - 6 bytes - MAC addresses + macaddr + 6 bytes + MAC addresses @@ -2612,93 +2612,93 @@ SELECT * FROM test1 WHERE a; <type>cidr</> Type Input Examples - - cidr Input - cidr Output - abbrev(cidr) - + + cidr Input + cidr Output + abbrev(cidr) + - - 192.168.100.128/25 - 192.168.100.128/25 - 192.168.100.128/25 - - - 192.168/24 - 192.168.0.0/24 - 192.168.0/24 - - - 192.168/25 - 192.168.0.0/25 - 192.168.0.0/25 - - - 192.168.1 - 192.168.1.0/24 - 192.168.1/24 - - - 192.168 - 192.168.0.0/24 - 192.168.0/24 - - - 128.1 - 128.1.0.0/16 - 128.1/16 - - - 128 - 128.0.0.0/16 - 128.0/16 - - - 128.1.2 - 128.1.2.0/24 - 128.1.2/24 - - - 10.1.2 - 10.1.2.0/24 - 10.1.2/24 - - - 10.1 - 10.1.0.0/16 - 10.1/16 - - - 10 - 10.0.0.0/8 - 10/8 - - - 10.1.2.3/32 - 10.1.2.3/32 + + 192.168.100.128/25 + 192.168.100.128/25 + 192.168.100.128/25 + + + 192.168/24 + 192.168.0.0/24 + 192.168.0/24 + + + 192.168/25 + 192.168.0.0/25 + 192.168.0.0/25 + + + 192.168.1 + 192.168.1.0/24 + 192.168.1/24 + + + 192.168 + 192.168.0.0/24 + 192.168.0/24 + + + 128.1 + 128.1.0.0/16 + 128.1/16 + + + 128 + 128.0.0.0/16 + 128.0/16 + + + 128.1.2 + 128.1.2.0/24 + 128.1.2/24 + + + 10.1.2 + 10.1.2.0/24 + 10.1.2/24 + + + 10.1 + 10.1.0.0/16 + 10.1/16 + + 
+ 10 + 10.0.0.0/8 + 10/8 + + 10.1.2.3/32 - + 10.1.2.3/32 + 10.1.2.3/32 + - 2001:4f8:3:ba::/64 - 2001:4f8:3:ba::/64 - 2001:4f8:3:ba::/64 - + 2001:4f8:3:ba::/64 + 2001:4f8:3:ba::/64 + 2001:4f8:3:ba::/64 + - 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128 - 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128 - 2001:4f8:3:ba:2e0:81ff:fe22:d1f1 - - - ::ffff:1.2.3.0/120 - ::ffff:1.2.3.0/120 + 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128 + 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128 + 2001:4f8:3:ba:2e0:81ff:fe22:d1f1 + + + ::ffff:1.2.3.0/120 + ::ffff:1.2.3.0/120 ::ffff:1.2.3/120 - - - ::ffff:1.2.3.0/128 - ::ffff:1.2.3.0/128 + + ::ffff:1.2.3.0/128 - + ::ffff:1.2.3.0/128 + ::ffff:1.2.3.0/128 + @@ -2715,10 +2715,10 @@ SELECT * FROM test1 WHERE a; - If you do not like the output format for inet or - cidr values, try the functions host, - text, and abbrev. - + If you do not like the output format for inet or + cidr values, try the functions host, + text, and abbrev. +
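[Editor's example] A sketch of the output-formatting functions just mentioned, using values from the table above:

```sql
-- host() drops the netmask, text() keeps it, and abbrev()
-- suppresses trailing zero octets (cf. the cidr table above).
SELECT host(inet '192.168.23.20/30'),
       text(inet '192.168.23.20/30'),
       abbrev(cidr '10.1.0.0/16');
-- → 192.168.23.20, 192.168.23.20/30, 10.1/16
```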
@@ -2939,62 +2939,62 @@ SELECT * FROM test; - Name - References - Description - Value Example + Name + References + Description + Value Example - oid - any - numeric object identifier - 564182 + oid + any + numeric object identifier + 564182 - regproc - pg_proc - function name - sum + regproc + pg_proc + function name + sum - regprocedure - pg_proc - function with argument types - sum(int4) + regprocedure + pg_proc + function with argument types + sum(int4) - regoper - pg_operator - operator name - + + regoper + pg_operator + operator name + + - regoperator - pg_operator - operator with argument types - *(integer,integer) or -(NONE,integer) + regoperator + pg_operator + operator with argument types + *(integer,integer) or -(NONE,integer) - regclass - pg_class - relation name - pg_type + regclass + pg_class + relation name + pg_type - regtype - pg_type - data type name - integer + regtype + pg_type + data type name + integer @@ -3026,10 +3026,10 @@ SELECT * FROM test; A final identifier type used by the system is tid, or tuple - identifier. This is the data type of the system column + identifier (row identifier). This is the data type of the system column ctid. A tuple ID is a pair (block number, tuple index within block) that identifies the - physical location of the tuple within its table. + physical location of the row within its table. @@ -3099,63 +3099,63 @@ SELECT * FROM test; - Name - Description + Name + Description - any - Indicates that a function accepts any input data type whatever. + any + Indicates that a function accepts any input data type whatever. - anyarray - Indicates that a function accepts any array data type - (see ). + anyarray + Indicates that a function accepts any array data type + (see ). - anyelement - Indicates that a function accepts any data type - (see ). + anyelement + Indicates that a function accepts any data type + (see ). - cstring - Indicates that a function accepts or returns a null-terminated C string. 
+ cstring + Indicates that a function accepts or returns a null-terminated C string. - internal - Indicates that a function accepts or returns a server-internal - data type. + internal + Indicates that a function accepts or returns a server-internal + data type. - language_handler - A procedural language call handler is declared to return language_handler. + language_handler + A procedural language call handler is declared to return language_handler. - record - Identifies a function returning an unspecified row type. + record + Identifies a function returning an unspecified row type. - trigger - A trigger function is declared to return trigger. + trigger + A trigger function is declared to return trigger. - void - Indicates that a function returns no value. + void + Indicates that a function returns no value. - opaque - An obsolete type name that formerly served all the above purposes. + opaque + An obsolete type name that formerly served all the above purposes. diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index 9b2857c1d5..855369dbb6 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -1,4 +1,4 @@ - + Data Definition @@ -230,9 +230,9 @@ DROP TABLE products; The identity (transaction ID) of the inserting transaction for - this tuple. (Note: In this context, a tuple is an individual - state of a row; each update of a row creates a new tuple for the - same logical row.) + this row version. (A row version is an individual state of a + row; each update of a row creates a new row version for the same + logical row.) @@ -260,8 +260,8 @@ DROP TABLE products; The identity (transaction ID) of the deleting transaction, or - zero for an undeleted tuple. It is possible for this column to - be nonzero in a visible tuple: That usually indicates that the + zero for an undeleted row version. 
It is possible for this column to + be nonzero in a visible row version: That usually indicates that the deleting transaction hasn't committed yet, or that an attempted deletion was rolled back. @@ -289,9 +289,9 @@ DROP TABLE products; - The physical location of the tuple within its table. Note that + The physical location of the row version within its table. Note that although the ctid can be used to - locate the tuple very quickly, a row's + locate the row version very quickly, a row's ctid will change each time it is updated or moved by VACUUM FULL. Therefore ctid is useless as a long-term row @@ -332,7 +332,7 @@ DROP TABLE products; of 232 (4 billion) SQL commands within a single transaction. In practice this limit is not a problem --- note that the limit is on number of - SQL commands, not number of tuples processed. + SQL commands, not number of rows processed. @@ -1047,7 +1047,7 @@ SELECT name, altitude - In some cases you may wish to know which table a particular tuple + In some cases you may wish to know which table a particular row originated from. There is a system column called TABLEOID in each table which can tell you the originating table: diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 96ab3dfd7b..71373bbd19 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -1,5 +1,5 @@ @@ -541,7 +541,7 @@ PostgreSQL documentation indicates double precision. Many of these functions are provided in multiple forms with different argument types. Except where noted, any given form of a function returns the same - datatype as its argument. + data type as its argument. The functions working with double precision data are mostly implemented on top of the host system's C library; accuracy and behavior in boundary cases may therefore vary depending on the host system. 
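[Editor's example] The system columns discussed above can be selected explicitly like ordinary columns. A sketch using the products table from the surrounding examples:

```sql
-- ctid is the row version's physical location; xmin and xmax are
-- the inserting and deleting transaction IDs; tableoid identifies
-- the originating table (useful with inheritance).
SELECT ctid, xmin, xmax, tableoid, * FROM products;
```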
@@ -3182,7 +3182,7 @@ substring('foobar' from 'o(.)b') o \f - formfeed, as in C + form feed, as in C @@ -4869,7 +4869,7 @@ substring('foobar' from 'o(.)b') o - In addition to these functions, the SQL OVERLAPS keyword is + In addition to these functions, the SQL OVERLAPS operator is supported: ( start1, end1 ) OVERLAPS ( start2, end2 ) @@ -4877,8 +4877,8 @@ substring('foobar' from 'o(.)b') o This expression yields true when two time periods (defined by their endpoints) overlap, false when they do not overlap. The endpoints - can be specified as pairs of dates, times, or timestamps; or as - a date, time, or timestamp followed by an interval. + can be specified as pairs of dates, times, or time stamps; or as + a date, time, or time stamp followed by an interval. @@ -5471,7 +5471,7 @@ SELECT timeofday(); the intent is to allow a single transaction to have a consistent notion of the current time, so that multiple modifications within the same transaction bear the same - timestamp. timeofday() + time stamp. timeofday() returns the wall-clock time and does advance during transactions. @@ -6043,7 +6043,7 @@ SELECT TIMESTAMP 'now'; hostmask(inet) inet - construct hostmask for network + construct host mask for network hostmask('192.168.23.20/30') 0.0.0.3 diff --git a/doc/src/sgml/indexcost.sgml b/doc/src/sgml/indexcost.sgml index 09eb5234d9..026ad9f9c7 100644 --- a/doc/src/sgml/indexcost.sgml +++ b/doc/src/sgml/indexcost.sgml @@ -1,5 +1,5 @@ @@ -39,7 +39,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/indexcost.sgml,v 2.16 2003/02/08 20:2 The amcostestimate function is given a list of WHERE clauses that have been determined to be usable with the index. It must return estimates of the cost of accessing the index and the selectivity of the WHERE - clauses (that is, the fraction of main-table tuples that will be + clauses (that is, the fraction of main-table rows that will be retrieved during the index scan). 
For simple cases, nearly all the work of the cost estimator can be done by calling standard routines in the optimizer; the point of having an amcostestimate function is @@ -157,7 +157,7 @@ amcostestimate (Query *root, The index access costs should be computed in the units used by src/backend/optimizer/path/costsize.c: a sequential disk block fetch has cost 1.0, a nonsequential fetch has cost random_page_cost, and - the cost of processing one index tuple should usually be taken as + the cost of processing one index row should usually be taken as cpu_index_tuple_cost (which is a user-adjustable optimizer parameter). In addition, an appropriate multiple of cpu_operator_cost should be charged for any comparison operators invoked during index processing (especially @@ -167,27 +167,27 @@ amcostestimate (Query *root, The access costs should include all disk and CPU costs associated with scanning the index itself, but NOT the costs of retrieving or processing - the main-table tuples that are identified by the index. + the main-table rows that are identified by the index. The start-up cost is the part of the total scan cost that must be expended - before we can begin to fetch the first tuple. For most indexes this can + before we can begin to fetch the first row. For most indexes this can be taken as zero, but an index type with a high start-up cost might want to set it nonzero. The indexSelectivity should be set to the estimated fraction of the main - table tuples that will be retrieved during the index scan. In the case + table rows that will be retrieved during the index scan. In the case of a lossy index, this will typically be higher than the fraction of - tuples that actually pass the given qual conditions. + rows that actually pass the given qual conditions. The indexCorrelation should be set to the correlation (ranging between -1.0 and 1.0) between the index order and the table order. 
This is used - to adjust the estimate for the cost of fetching tuples from the main + to adjust the estimate for the cost of fetching rows from the main table. @@ -199,7 +199,7 @@ amcostestimate (Query *root, - Estimate and return the fraction of main-table tuples that will be visited + Estimate and return the fraction of main-table rows that will be visited based on the given qual conditions. In the absence of any index-type-specific knowledge, use the standard optimizer function clauselist_selectivity(): @@ -212,10 +212,10 @@ amcostestimate (Query *root, - Estimate the number of index tuples that will be visited during the + Estimate the number of index rows that will be visited during the scan. For many index types this is the same as indexSelectivity times - the number of tuples in the index, but it might be more. (Note that the - index's size in pages and tuples is available from the IndexOptInfo struct.) + the number of rows in the index, but it might be more. (Note that the + index's size in pages and rows is available from the IndexOptInfo struct.) @@ -234,7 +234,7 @@ amcostestimate (Query *root, /* * Our generic assumption is that the index pages will be read * sequentially, so they have cost 1.0 each, not random_page_cost. - * Also, we charge for evaluation of the indexquals at each index tuple. + * Also, we charge for evaluation of the indexquals at each index row. * All the costs are assumed to be paid incrementally during the scan. */ cost_qual_eval(&index_qual_cost, indexQuals); diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml index 72141f8c73..787ca5eea2 100644 --- a/doc/src/sgml/installation.sgml +++ b/doc/src/sgml/installation.sgml @@ -1,4 +1,4 @@ - + <![%standalone-include[<productname>PostgreSQL</>]]> @@ -915,8 +915,9 @@ JAVACMD=$JAVA_HOME/bin/java <term><option>--enable-thread-safety</option></term> <listitem> <para> - Allow separate libpq and ecpg threads to safely control their - private connection handles. 
+ Allow separate threads in <application>libpq</application> + and <application>ECPG</application> programs to safely + control their private connection handles. </para> </listitem> </varlistentry> diff --git a/doc/src/sgml/jdbc.sgml b/doc/src/sgml/jdbc.sgml index bff52a9668..255b6d6b9a 100644 --- a/doc/src/sgml/jdbc.sgml +++ b/doc/src/sgml/jdbc.sgml @@ -1,5 +1,5 @@ <!-- -$Header: /cvsroot/pgsql/doc/src/sgml/Attic/jdbc.sgml,v 1.49 2003/09/20 20:12:05 tgl Exp $ +$Header: /cvsroot/pgsql/doc/src/sgml/Attic/jdbc.sgml,v 1.50 2003/11/01 01:56:29 petere Exp $ --> <chapter id="jdbc"> @@ -572,7 +572,7 @@ st.close(); <sect1 id="jdbc-callproc"> <title>Calling Stored Functions - PostgreSQL's jdbc driver fully + PostgreSQL's JDBC driver fully supports calling PostgreSQL stored functions. @@ -636,7 +636,7 @@ upperProc.close(); When calling a function that returns a refcursor you must cast the return type - of getObject to + of getObject to a ResultSet @@ -953,7 +953,7 @@ ps.close(); PostgreSQL is an extensible database - system. You can add your own functions to the backend, which can + system. You can add your own functions to the server, which can then be called from queries, or even add your own data types. As these are facilities unique to PostgreSQL, we support them from Java, with a set of extension @@ -999,7 +999,7 @@ public class PGConnection public Fastpath getFastpathAPI() throws SQLException - This returns the Fastpath API for the + This returns the fast-path API for the current connection. It is primarily used by the Large Object API. @@ -1017,15 +1017,15 @@ Fastpath fp = ((org.postgresql.PGConnection)myconn).getFastpathAPI(); Returns: - Fastpath object allowing access to functions on the - PostgreSQL backend. + Fastpath object allowing access to functions on the + PostgreSQL server. 
Throws: - SQLException by Fastpath when initializing for first time + SQLException by Fastpath when initializing for first time @@ -1110,7 +1110,7 @@ java.lang.Object Fastpath is an API that exists within the libpq C interface, and allows a client machine - to execute a function on the database backend. Most client code + to execute a function on the database server. Most client code will not need to use this method, but it is provided because the Large Object API uses it. @@ -1134,7 +1134,7 @@ Fastpath fp = ((org.postgresql.PGConnection)conn).getFastpathAPI(); the getFastpathAPI() is an extension method, not part of JDBC. Once you have a Fastpath instance, you can use the - fastpath() methods to execute a backend + fastpath() methods to execute a server function. @@ -1156,7 +1156,7 @@ public Object fastpath(int fnid, FastpathArg args[]) throws SQLException - Send a function call to the PostgreSQL backend. + Send a function call to the PostgreSQL server. @@ -1166,7 +1166,7 @@ public Object fastpath(int fnid, resulttype - True if the result is an integer, false for other results - args - FastpathArguments to pass to fastpath + args - FastpathArguments to pass to fast-path call @@ -1186,7 +1186,7 @@ public Object fastpath(String name, FastpathArg args[]) throws SQLException - Send a function call to the PostgreSQL backend by name. + Send a function call to the PostgreSQL server by name. @@ -1194,7 +1194,7 @@ public Object fastpath(String name, The mapping for the procedure name to function id needs to exist, usually to an earlier call to addfunction(). This is the preferred method to call, as function id's can/may change - between versions of the backend. For an example of how this + between versions of the server. 
For an example of how this works, refer to org.postgresql.LargeObject @@ -1206,7 +1206,7 @@ public Object fastpath(String name, resulttype - True if the result is an integer, false for other results - args - FastpathArguments to pass to fastpath + args - FastpathArguments to pass to fast-path call
@@ -1381,7 +1381,7 @@ java.lang.Object - Each fastpath call requires an array of arguments, the number and + Each fast-path call requires an array of arguments, the number and type dependent on the function being called. This class implements methods needed to provide this capability. @@ -1700,7 +1700,7 @@ java.lang.Object Cloneable This implements a line consisting of two points. Currently line is -not yet implemented in the backend, but this class ensures that when +not yet implemented in the server, but this class ensures that when it's done were ready for it. Variables diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 1f81223b50..4431df7911 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -1,5 +1,5 @@ @@ -92,7 +92,7 @@ PGconn *PQconnectdb(const char *conninfo); - The currently recognized parameter keywords are: + The currently recognized parameter key words are: @@ -307,7 +307,7 @@ PGconn *PQsetdbLogin(const char *pghost, This is the predecessor of PQconnectdb with a fixed set of parameters. It has the same functionality except that the - missing parameters will always take on default values. Write NULL or an + missing parameters will always take on default values. Write NULL or an empty string for any one of the fixed parameters that is to be defaulted. @@ -849,7 +849,7 @@ const char *PQparameterStatus(const PGconn *conn, const char *paramName); Certain parameter values are reported by the server automatically at connection startup or whenever their values change. PQparameterStatus can be used to interrogate these settings. -It returns the current value of a parameter if known, or NULL if the parameter +It returns the current value of a parameter if known, or NULL if the parameter is not known. @@ -890,7 +890,7 @@ or zero (connection bad). This will not change after connection startup is complete, but it could theoretically change during a reset. 
The 3.0 protocol will normally be used when communicating with PostgreSQL 7.4 or later servers; pre-7.4 servers support -only protocol 2.0. (Protocol 1.0 is obsolete and not supported by libpq.) +only protocol 2.0. (Protocol 1.0 is obsolete and not supported by libpq.) @@ -1071,23 +1071,23 @@ as $1, $2, etc. nParams is the number of parameters supplied; it is the length of the arrays paramTypes[], paramValues[], paramLengths[], and paramFormats[]. (The -array pointers may be NULL when nParams is zero.) -paramTypes[] specifies, by OID, the datatypes to be assigned to -the parameter symbols. If paramTypes is NULL, or any particular -element in the array is zero, the backend assigns a datatype to the parameter +array pointers may be NULL when nParams is zero.) +paramTypes[] specifies, by OID, the data types to be assigned to +the parameter symbols. If paramTypes is NULL, or any particular +element in the array is zero, the server assigns a data type to the parameter symbol in the same way it would do for an untyped literal string. paramValues[] specifies the actual values of the parameters. -A NULL pointer in this array means the corresponding parameter is NULL; +A null pointer in this array means the corresponding parameter is null; otherwise the pointer points to a zero-terminated text string (for text -format) or binary data in the format expected by the backend (for binary +format) or binary data in the format expected by the server (for binary format). paramLengths[] specifies the actual data lengths of -binary-format parameters. It is ignored for NULL parameters and text-format -parameters. The array pointer may be NULL when there are no binary +binary-format parameters. It is ignored for null parameters and text-format +parameters. The array pointer may be null when there are no binary parameters. paramFormats[] specifies whether parameters are text (put a zero in the array) or binary (put a one in the array). 
If the array pointer is -NULL then all parameters are presumed to be text. +null then all parameters are presumed to be text. resultFormat is zero to obtain results in text format, or one to obtain results in binary format. (There is not currently a provision to obtain different result columns in different formats, although that is @@ -1544,7 +1544,7 @@ char *PQfname(const PGresult *res, -NULL is returned if the column number is out of range. +NULL is returned if the column number is out of range. @@ -1761,15 +1761,15 @@ char* PQgetvalue(const PGresult *res, For data in text format, the value returned by PQgetvalue is a null-terminated character string representation of the field value. For data in binary format, the value is in the binary -representation determined by the datatype's typsend and +representation determined by the data type's typsend and typreceive functions. (The value is actually followed by a zero byte in this case too, but that is not ordinarily useful, since the value is likely to contain embedded nulls.) -An empty string is returned if the field value is NULL. See -PQgetisnull to distinguish NULLs from empty-string values. +An empty string is returned if the field value is null. See +PQgetisnull to distinguish null values from empty-string values. @@ -2123,7 +2123,7 @@ void PQfreemem(void *ptr); PQunescapeBytea, and PQnotifies. It is needed by Win32, which can not free memory across - DLL's, unless multithreaded DLL's (/MD in VC6) are used. + DLLs, unless multithreaded DLLs (/MD in VC6) are used. On other platforms it is the same as free(). @@ -2547,13 +2547,13 @@ typedef struct { parameters to be passed to the function; they must match the declared function argument list. When the isint field of a parameter - struct is true, + structure is true, the u.integer value is sent to the server as an integer of the indicated length (this must be 1, 2, or 4 bytes); proper byte-swapping occurs.
When isint is false, the indicated number of bytes at *u.ptr are sent with no processing; the data must be in the format expected by the server for - binary transmission of the function's argument datatype. + binary transmission of the function's argument data type. result_buf is the buffer in which to place the return value. The caller must have allocated sufficient space to store the return value. (There is no check!) @@ -2577,7 +2577,7 @@ caller is responsible for freeing the PGresult with -Note that it is not possible to handle NULL arguments, NULL results, nor +Note that it is not possible to handle null arguments, null results, nor set-valued results when using this interface. @@ -2646,8 +2646,8 @@ do not represent separate allocations. In PostgreSQL 6.4 and later, - the be_pid is that of the notifying backend process, - whereas in earlier versions it was always the PID of your own backend process. + the be_pid is that of the notifying server process, + whereas in earlier versions it was always the PID of your own server process. @@ -2724,7 +2724,7 @@ if any notifications came in during the processing of the command. PQexec in a string that could contain additional commands, the application must continue fetching results via PQgetResult after completing the COPY - sequence. Only when PQgetResult returns NULL is it certain + sequence. Only when PQgetResult returns NULL is it certain that the PQexec command string is done and it is safe to issue more commands. @@ -2816,15 +2816,15 @@ int PQputCopyData(PGconn *conn, Transmits the COPY data in the specified buffer, of length nbytes, to the server. The result is 1 if the data was sent, zero if it was not sent because the attempt would block (this case is only -possible if the connection is in nonblock mode), or -1 if an error occurred. +possible if the connection is in nonblocking mode), or -1 if an error occurred. (Use PQerrorMessage to retrieve details if the return value is -1. 
If the value is zero, wait for write-ready and try again.) -The application may divide the COPY datastream into bufferloads of any -convenient size. Bufferload boundaries have no semantic significance when -sending. The contents of the datastream must match the data format expected +The application may divide the COPY data stream into buffer loads of any +convenient size. Buffer-load boundaries have no semantic significance when +sending. The contents of the data stream must match the data format expected by the COPY command; see for details. @@ -2844,7 +2844,7 @@ int PQputCopyEnd(PGconn *conn, Ends the COPY_IN operation successfully if errormsg -is NULL. If errormsg is not NULL then the COPY +is NULL. If errormsg is not NULL then the COPY is forced to fail, with the string pointed to by errormsg used as the error message. (One should not assume that this exact error message will come back from the server, however, as the server might have @@ -2855,7 +2855,7 @@ option to force failure does not work when using pre-3.0-protocol connections.) The result is 1 if the termination data was sent, zero if it was not sent because the attempt would block (this case is only -possible if the connection is in nonblock mode), or -1 if an error occurred. +possible if the connection is in nonblocking mode), or -1 if an error occurred. (Use PQerrorMessage to retrieve details if the return value is -1. If the value is zero, wait for write-ready and try again.) @@ -2900,9 +2900,9 @@ Attempts to obtain another row of data from the server during a COPY. Data is always returned one data row at a time; if only a partial row is available, it is not returned. Successful return of a data row involves allocating a chunk of memory to hold the data. The -buffer parameter must be non-NULL. *buffer -is set to point to the allocated memory, or to NULL in cases where no -buffer is returned. A non-NULL result buffer must be freed using +buffer parameter must be non-NULL. 
*buffer +is set to point to the allocated memory, or to NULL in cases where no +buffer is returned. A non-NULL result buffer must be freed using PQfreemem when no longer needed. @@ -3053,7 +3053,7 @@ int PQputline(PGconn *conn, -The COPY datastream sent by a series of calls to +The COPY data stream sent by a series of calls to PQputline has the same format as that returned by PQgetlineAsync, except that applications are not obliged to send exactly one data row per PQputline diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml index 474cde427e..4cdac668f0 100644 --- a/doc/src/sgml/lobj.sgml +++ b/doc/src/sgml/lobj.sgml @@ -1,5 +1,5 @@ @@ -324,10 +324,10 @@ SELECT lo_export(image.raster, '/tmp/motd') FROM image -These functions read and write files in the server's filesystem, using the +These functions read and write files in the server's file system, using the permissions of the database's owning user. Therefore, their use is restricted to superusers. (In contrast, the client-side import and export functions -read and write files in the client's filesystem, using the permissions of +read and write files in the client's file system, using the permissions of the client program. Their use is not restricted.) diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml index e9e17ea2f4..6a42552b8f 100644 --- a/doc/src/sgml/maintenance.sgml +++ b/doc/src/sgml/maintenance.sgml @@ -1,5 +1,5 @@ @@ -101,13 +101,13 @@ $Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.26 2003/10/09 19:05:09 In normal PostgreSQL operation, an UPDATE or DELETE of a row does not - immediately remove the old tuple (version of the row). + immediately remove the old version of the row. This approach is necessary to gain the benefits of multiversion - concurrency control (see ): the tuple + concurrency control (see ): the row version must not be deleted while it is still potentially visible to other - transactions. 
But eventually, an outdated or deleted tuple is no + transactions. But eventually, an outdated or deleted row version is no longer of interest to any transaction. The space it occupies must be - reclaimed for reuse by new tuples, to avoid infinite growth of disk + reclaimed for reuse by new rows, to avoid infinite growth of disk space requirements. This is done by running VACUUM. @@ -125,7 +125,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.26 2003/10/09 19:05:09 The standard form of VACUUM is best used with the goal of maintaining a fairly level steady-state usage of disk space. The standard - form finds old tuples and makes their space available for re-use within + form finds old row versions and makes their space available for re-use within the table, but it does not try very hard to shorten the table file and return disk space to the operating system. If you need to return disk space to the operating system you can use VACUUM FULL --- @@ -147,7 +147,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.26 2003/10/09 19:05:09 VACUUM FULL is recommended for cases where you know you have - deleted the majority of tuples in a table, so that the steady-state size + deleted the majority of rows in a table, so that the steady-state size of the table can be shrunk substantially with VACUUM FULL's more aggressive approach. @@ -242,7 +242,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.26 2003/10/09 19:05:09 PostgreSQL's MVCC transaction semantics depend on being able to compare transaction ID (XID) - numbers: a tuple with an insertion XID greater than the current + numbers: a row version with an insertion XID greater than the current transaction's XID is in the future and should not be visible to the current transaction. 
But since transaction IDs have limited size (32 bits at this writing) a cluster that runs for a long time (more @@ -283,29 +283,29 @@ $Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.26 2003/10/09 19:05:09 that for every normal XID, there are two billion XIDs that are older and two billion that are newer; another way to say it is that the normal XID space is circular with no - endpoint. Therefore, once a tuple has been created with a particular - normal XID, the tuple will appear to be in the past for + endpoint. Therefore, once a row version has been created with a particular + normal XID, the row version will appear to be in the past for the next two billion transactions, no matter which normal XID we are - talking about. If the tuple still exists after more than two billion + talking about. If the row version still exists after more than two billion transactions, it will suddenly appear to be in the future. To - prevent data loss, old tuples must be reassigned the XID + prevent data loss, old row versions must be reassigned the XID FrozenXID sometime before they reach the two-billion-transactions-old mark. Once they are assigned this special XID, they will appear to be in the past to all normal transactions regardless of wraparound issues, and so such - tuples will be good until deleted, no matter how long that is. This + row versions will be good until deleted, no matter how long that is. This reassignment of XID is handled by VACUUM. VACUUM's normal policy is to reassign FrozenXID - to any tuple with a normal XID more than one billion transactions in the + to any row version with a normal XID more than one billion transactions in the past. This policy preserves the original insertion XID until it is not - likely to be of interest anymore. (In fact, most tuples will probably + likely to be of interest anymore. (In fact, most row versions will probably live and die without ever being frozen.) 
With this policy, the maximum safe interval between VACUUM runs on any table is exactly one billion transactions: if you wait longer, it's possible - that a tuple that was not quite old enough to be reassigned last time + that a row version that was not quite old enough to be reassigned last time is now more than two billion transactions old and has wrapped around into the future --- i.e., is lost to you. (Of course, it'll reappear after another two billion transactions, but that's no help.) @@ -355,10 +355,10 @@ VACUUM VACUUM with the FREEZE option uses a more - aggressive freezing policy: tuples are frozen if they are old enough + aggressive freezing policy: row versions are frozen if they are old enough to be considered good by all open transactions. In particular, if a VACUUM FREEZE is performed in an otherwise-idle - database, it is guaranteed that all tuples in that + database, it is guaranteed that all row versions in that database will be frozen. Hence, as long as the database is not modified in any way, it will not need subsequent vacuuming to avoid transaction ID wraparound problems. This technique is used by @@ -422,13 +422,16 @@ VACUUM The simplest production-grade approach to managing log output is to - send it all to syslog and let syslog - deal with file rotation. To do this, set the configurations parameter - syslog to 2 (to log to syslog only) in - postgresql.conf. Then you can send a SIGHUP - signal to the syslog daemon whenever you want to force it - to start writing a new log file. If you want to automate log rotation, - the logrotate program can be configured to work with log files from syslog. + send it all to syslog and let + syslog deal with file rotation. To do this, set the + configuration parameter syslog to 2 (to log to + syslog only) in postgresql.conf. Then + you can send a SIGHUP signal to the + syslog daemon whenever you want to force it to + start writing a new log file.
If you want to automate log + rotation, the logrotate program can be + configured to work with log files from + syslog. diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml index 1ea6101a80..2ba07811ef 100644 --- a/doc/src/sgml/monitoring.sgml +++ b/doc/src/sgml/monitoring.sgml @@ -1,5 +1,5 @@ @@ -237,8 +237,8 @@ postgres: user database host pg_stat_all_tables For each table in the current database, total numbers of - sequential and index scans, total numbers of tuples returned by - each type of scan, and totals of tuple insertions, updates, + sequential and index scans, total numbers of rows returned by + each type of scan, and totals of row insertions, updates, and deletions. @@ -257,9 +257,9 @@ postgres: user database host pg_stat_all_indexes For each index in the current database, the total number - of index scans that have used that index, the number of index tuples - read, and the number of successfully fetched heap tuples. (This may - be less when there are index entries pointing to expired heap tuples.) + of index scans that have used that index, the number of index rows + read, and the number of successfully fetched heap rows. (This may + be less when there are index entries pointing to expired heap rows.) 
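The logrotate arrangement mentioned above could look something like the following stanza. The log path is hypothetical (where syslog actually writes depends on /etc/syslog.conf), and the retention settings are only an example:

```
# hypothetical /etc/logrotate.d/postgresql entry for a
# syslog-managed server log
/var/log/postgresql.log {
    weekly
    rotate 8
    compress
    postrotate
        # make syslogd reopen its (now rotated) output file
        kill -HUP `cat /var/run/syslogd.pid`
    endscript
}
```

The postrotate HUP plays the same role as the manual SIGHUP described above, just driven by logrotate's schedule instead of by hand.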
@@ -441,8 +441,8 @@ postgres: user database host pg_stat_get_tuples_returned(oid) bigint - Number of tuples read by sequential scans when argument is a table, - or number of index tuples read when argument is an index + Number of rows read by sequential scans when argument is a table, + or number of index rows read when argument is an index @@ -450,7 +450,7 @@ postgres: user database host pg_stat_get_tuples_fetched(oid) bigint - Number of valid (unexpired) table tuples fetched by sequential scans + Number of valid (unexpired) table rows fetched by sequential scans when argument is a table, or fetched by index scans using this index when argument is an index @@ -460,7 +460,7 @@ postgres: user database host pg_stat_get_tuples_inserted(oid) bigint - Number of tuples inserted into table + Number of rows inserted into table @@ -468,7 +468,7 @@ postgres: user database host pg_stat_get_tuples_updated(oid) bigint - Number of tuples updated in table + Number of rows updated in table @@ -476,7 +476,7 @@ postgres: user database host pg_stat_get_tuples_deleted(oid) bigint - Number of tuples deleted from table + Number of rows deleted from table diff --git a/doc/src/sgml/nls.sgml b/doc/src/sgml/nls.sgml index 634c82e90b..d857b66bb5 100644 --- a/doc/src/sgml/nls.sgml +++ b/doc/src/sgml/nls.sgml @@ -1,5 +1,5 @@ @@ -35,7 +35,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/nls.sgml,v 1.6 2003/05/19 21:38:23 tgl Exp this is only in the unlikely event that you do not want to try out your translated messages. When you configure your source tree, be sure to use the option. This will - also check for the libintl library and the + also check for the libintl library and the msgfmt program, which all end users will need anyway. To try out your work, follow the applicable portions of the installation instructions. @@ -49,7 +49,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/nls.sgml,v 1.6 2003/05/19 21:38:23 tgl Exp implementation. 
Later, we will try to arrange it so that if you use a packaged source distribution, you won't need xgettext. (From CVS, you will still need - it.) GNU gettext 0.10.36 or later is currently recommended. + it.) GNU Gettext 0.10.36 or later is currently recommended. @@ -374,7 +374,7 @@ fprintf(stderr, gettext("panic level %d\n"), lvl); - CATALOG_NAME + CATALOG_NAME @@ -385,7 +385,7 @@ fprintf(stderr, gettext("panic level %d\n"), lvl); - AVAIL_LANGUAGES + AVAIL_LANGUAGES @@ -395,7 +395,7 @@ fprintf(stderr, gettext("panic level %d\n"), lvl); - GETTEXT_FILES + GETTEXT_FILES @@ -411,7 +411,7 @@ fprintf(stderr, gettext("panic level %d\n"), lvl); - GETTEXT_TRIGGERS + GETTEXT_TRIGGERS diff --git a/doc/src/sgml/page.sgml b/doc/src/sgml/page.sgml index 6fea43df87..03c81f76ba 100644 --- a/doc/src/sgml/page.sgml +++ b/doc/src/sgml/page.sgml @@ -1,5 +1,5 @@ @@ -27,7 +27,7 @@ In the following explanation, a is assumed to contain 8 bits. In addition, the term item refers to an individual data value that is stored on a page. In a table, -an item is a tuple (row); in an index, an item is an index entry. +an item is a row; in an index, an item is an index entry. @@ -65,7 +65,7 @@ free space pointers. Free space -The unallocated space. All new tuples are allocated from here, generally from the end. +The unallocated space. All new rows are allocated from here, generally from the end. @@ -162,7 +162,7 @@ data. Empty in ordinary tables. and a version indicator. Beginning with PostgreSQL 7.3 the version number is 1; prior releases used version number 0. (The basic page layout and header format - has not changed, but the layout of heap tuple headers has.) The page size + has not changed, but the layout of heap row headers has.) The page size is basically only present as a cross-check; there is no support for having more than one page size in an installation. @@ -209,12 +209,12 @@ data. Empty in ordinary tables. - All table tuples are structured the same way. 
There is a fixed-size + All table rows are structured the same way. There is a fixed-size header (occupying 23 bytes on most machines), followed by an optional null bitmap, an optional object ID field, and the user data. The header is detailed in . The actual user data - (fields of the tuple) begins at the offset indicated by + (columns of the row) begins at the offset indicated by t_hoff, which must always be a multiple of the MAXALIGN distance for the platform. The null bitmap is @@ -274,13 +274,13 @@ data. Empty in ordinary tables. t_xvac TransactionId 4 bytes - XID for VACUUM operation moving tuple + XID for VACUUM operation moving row version t_ctid ItemPointerData 6 bytes - current TID of this or newer tuple + current TID of this or newer row version t_natts diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml index d04f5ce89d..eaeb6c5960 100644 --- a/doc/src/sgml/perform.sgml +++ b/doc/src/sgml/perform.sgml @@ -1,5 +1,5 @@ @@ -614,7 +614,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; from_collapse_limit (so that explicit joins and subselects act similarly) or set join_collapse_limit to 1 (if you want to control join order with explicit joins). But you might set them - differently if you are trying to fine-tune the tradeoff between planning + differently if you are trying to fine-tune the trade off between planning time and run time. diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index 8db8029cf9..3ea9461452 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -1,5 +1,5 @@ @@ -180,11 +180,11 @@ END; - Supported Argument and Result Datatypes + Supported Argument and Result Data Types Functions written in PL/pgSQL can accept - as arguments any scalar or array datatype supported by the server, + as arguments any scalar or array data type supported by the server, and they can return a result of any of these types. 
They can also accept or return any composite type (row type) specified by name. It is also possible to declare a PL/pgSQL @@ -197,14 +197,14 @@ END; PL/pgSQL functions may also be declared to accept and return the polymorphic types anyelement and anyarray. The actual - datatypes handled by a polymorphic function can vary from call to + data types handled by a polymorphic function can vary from call to call, as discussed in . An example is shown in . PL/pgSQL functions can also be declared to return - a set, or table, of any datatype they can return a single + a set, or table, of any data type they can return a single instance of. Such a function generates its output by executing RETURN NEXT for each desired element of the result set. @@ -563,7 +563,7 @@ END; When the return type of a PL/pgSQL function is declared as a polymorphic type (anyelement or anyarray), a special parameter $0 - is created. Its datatype is the actual return type of the function, + is created. Its data type is the actual return type of the function, as deduced from the actual input types (see ). This allows the function to access its actual return type @@ -571,7 +571,7 @@ END; $0 is initialized to NULL and can be modified by the function, so it can be used to hold the return value if desired, though that is not required. $0 can also be - given an alias. For example, this function works on any datatype + given an alias. For example, this function works on any data type that has a + operator: CREATE FUNCTION add_three_values(anyelement, anyelement, anyelement) diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index e3f7af3141..54983bdb74 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -1,4 +1,4 @@ - + Frontend/Backend Protocol @@ -164,7 +164,7 @@ Formats and Format Codes - Data of a particular datatype might be transmitted in any of several + Data of a particular data type might be transmitted in any of several different formats. 
As of PostgreSQL 7.4 the only supported formats are text and binary, but the protocol makes provision for future extensions. The desired @@ -178,7 +178,7 @@ The text representation of values is whatever strings are produced and accepted by the input/output conversion functions for the - particular datatype. In the transmitted representation, there is + particular data type. In the transmitted representation, there is no trailing null character; the frontend must add one to received values if it wants to process them as C strings. (The text format does not allow embedded nulls, by the way.) @@ -186,9 +186,9 @@ Binary representations for integers use network byte order (most - significant byte first). For other datatypes consult the documentation + significant byte first). For other data types consult the documentation or source code to learn about the binary representation. Keep in mind - that binary representations for complex datatypes may change across + that binary representations for complex data types may change across server versions; the text format is usually the more portable choice. @@ -562,7 +562,7 @@ The response to a SELECT query (or other queries that - return rowsets, such as EXPLAIN or SHOW) + return row sets, such as EXPLAIN or SHOW) normally consists of RowDescription, zero or more DataRow messages, and then CommandComplete. COPY to or from the frontend invokes special protocol @@ -629,12 +629,12 @@ In the extended protocol, the frontend first sends a Parse message, which contains a textual query string, optionally some information - about datatypes of parameter placeholders, and the + about data types of parameter placeholders, and the name of a destination prepared-statement object (an empty string selects the unnamed prepared statement). The response is - either ParseComplete or ErrorResponse. Parameter datatypes may be + either ParseComplete or ErrorResponse. 
Parameter data types may be specified by OID; if not given, the parser attempts to infer the - datatypes in the same way as it would do for untyped literal string + data types in the same way as it would do for untyped literal string constants. @@ -701,7 +701,7 @@ unnamed portal) and a maximum result-row count (zero meaning fetch all rows). The result-row count is only meaningful for portals - containing commands that return rowsets; in other cases the command is + containing commands that return row sets; in other cases the command is always executed to completion, and the row count is ignored. The possible responses to Execute are the same as those described above for queries @@ -995,7 +995,7 @@ SET SQL command executed by the frontend, and this case is effectively synchronous --- but it is also possible for parameter status changes to occur because the administrator changed a configuration - file and then SIGHUP'd the postmaster. Also, if a SET command is + file and then sent the SIGHUP signal to the postmaster. Also, if a SET command is rolled back, an appropriate ParameterStatus message will be generated to report the current effective value. @@ -1450,7 +1450,7 @@ AuthenticationCleartextPassword (B) - Specifies that a cleartext password is required. + Specifies that a clear-text password is required. @@ -2126,9 +2126,9 @@ CopyData (F & B) - Data that forms part of a COPY datastream. Messages sent + Data that forms part of a COPY data stream. Messages sent from the backend will always correspond to single data rows, - but messages sent by frontends may divide the datastream + but messages sent by frontends may divide the data stream arbitrarily. @@ -3094,7 +3094,7 @@ ParameterDescription (B) - Specifies the object ID of the parameter datatype. + Specifies the object ID of the parameter data type. @@ -3213,7 +3213,7 @@ Parse (F) - The number of parameter datatypes specified + The number of parameter data types specified (may be zero). 
Note that this is not an indication of the number of parameters that might appear in the query string, only the number that the frontend wants to @@ -3230,7 +3230,7 @@ Parse (F) - Specifies the object ID of the parameter datatype. + Specifies the object ID of the parameter data type. Placing a zero here is equivalent to leaving the type unspecified. @@ -3532,7 +3532,7 @@ RowDescription (B) - The object ID of the field's datatype. + The object ID of the field's data type. @@ -3542,7 +3542,7 @@ RowDescription (B) - The datatype size (see pg_type.typlen). + The data type size (see pg_type.typlen). Note that negative values denote variable-width types. diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml index 2528154288..276bfd6323 100644 --- a/doc/src/sgml/queries.sgml +++ b/doc/src/sgml/queries.sgml @@ -1,4 +1,4 @@ - + Queries @@ -132,7 +132,7 @@ FROM table_reference , table_r When a table reference names a table that is the supertable of a table inheritance hierarchy, the table reference produces rows of not only that table but all of its subtable successors, unless the - keyword ONLY precedes the table name. However, the + key word ONLY precedes the table name. However, the reference produces only the columns that appear in the named table --- any columns added in subtables are ignored. @@ -1244,12 +1244,12 @@ SELECT a AS b FROM table1 ORDER BY a; determined with the > operator. - Actually, PostgreSQL uses the default btree - operator class for the column's datatype to determine the sort + Actually, PostgreSQL uses the default B-tree + operator class for the column's data type to determine the sort ordering for ASC and DESC. Conventionally, - datatypes will be set up so that the < and + data types will be set up so that the < and > operators correspond to this sort ordering, - but a user-defined datatype's designer could choose to do something + but a user-defined data type's designer could choose to do something different. 
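The queries.sgml hunk above makes the point that ASC/DESC ordering comes from the default B-tree operator class for the column's data type, not from the values themselves. A hypothetical analogy (Python, not SQL; `cmp_abs` is an invented comparator playing the role of an operator class) shows how swapping the comparison swaps the resulting order:

```python
# Analogy only: a comparator plays the role of a default B-tree operator
# class, defining what "ascending" means for a type. Here the invented
# ordering compares by absolute value, not by the usual numeric order.
from functools import cmp_to_key

def cmp_abs(a, b):
    # Returns -1/0/1, like a B-tree comparison support function.
    return (abs(a) > abs(b)) - (abs(a) < abs(b))

rows = [3, -7, 1, -2]
asc = sorted(rows, key=cmp_to_key(cmp_abs))                 # ORDER BY ... ASC
desc = sorted(rows, key=cmp_to_key(cmp_abs), reverse=True)  # ORDER BY ... DESC
```

A user-defined type that installs a different default operator class would change query ordering in exactly this way.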
diff --git a/doc/src/sgml/query.sgml index 26ffb06bd7..abea586bf0 100644 --- a/doc/src/sgml/query.sgml +++ b/doc/src/sgml/query.sgml @@ -1,5 +1,5 @@ @@ -155,7 +155,7 @@ CREATE TABLE weather ( and a rich set of geometric types. PostgreSQL can be customized with an arbitrary number of user-defined data types. Consequently, type - names are not syntactical keywords, except where required to + names are not syntactical key words, except where required to support special cases in the SQL standard. diff --git a/doc/src/sgml/ref/alter_sequence.sgml b/doc/src/sgml/ref/alter_sequence.sgml index 421ad0fc0d..ecfa057d86 100644 --- a/doc/src/sgml/ref/alter_sequence.sgml +++ b/doc/src/sgml/ref/alter_sequence.sgml @@ -1,5 +1,5 @@ @@ -49,7 +49,7 @@ ALTER SEQUENCE name [ INCREMENT [ B name - The name (optionally schema-qualified) of a sequence to be altered. + The name (optionally schema-qualified) of a sequence to be altered. @@ -58,42 +58,43 @@ ALTER SEQUENCE name [ INCREMENT [ B increment - The - - clause is optional. A positive value will make an - ascending sequence, a negative one a descending sequence. - If unspecified, the old increment value will be maintained. + The clause INCREMENT BY increment is + optional. A positive value will make an ascending sequence, a + negative one a descending sequence. If unspecified, the old + increment value will be maintained. minvalue - NO MINVALUE + NO MINVALUE - The optional clause - determines the minimum value - a sequence can generate. If is specified, - the defaults of 1 and -2^63-1 for ascending and descending sequences, respectively, will be used. If neither option is specified, the current minimum - value will be maintained. + The optional clause MINVALUE minvalue determines + the minimum value a sequence can generate. If NO + MINVALUE is specified, the defaults of 1 and + -2^63-1 for ascending and descending sequences, + respectively, will be used.
If neither option is specified, + the current minimum value will be maintained. maxvalue NO MAXVALUE - The optional clause - determines the maximum value for the sequence. If - is specified, the defaults are 2^63-1 and -1 for - ascending and descending sequences, respectively, will be used. If - neither option is specified, the current maximum value will be - maintained. + The optional clause MAXVALUE maxvalue determines + the maximum value for the sequence. If NO + MAXVALUE is specified, the defaults of + 2^63-1 and -1 for ascending and descending + sequences, respectively, will be used. If neither option is + specified, the current maximum value will be maintained. @@ -102,9 +103,9 @@ ALTER SEQUENCE name [ INCREMENT [ B start - The optional - clause changes the current value of the sequence. + The optional clause RESTART WITH start changes the + current value of the sequence. @@ -113,11 +114,12 @@ ALTER SEQUENCE name [ INCREMENT [ B cache - The option - enables sequence numbers to be preallocated - and stored in memory for faster access. The minimum - value is 1 (only one value can be generated at a time, i.e., no cache). - If unspecified, the old cache value will be maintained. + The clause CACHE cache enables + sequence numbers to be preallocated and stored in memory for + faster access. The minimum value is 1 (only one value can be + generated at a time, i.e., no cache). If unspecified, the old + cache value will be maintained. @@ -126,32 +128,33 @@ ALTER SEQUENCE name [ INCREMENT [ B CYCLE - The optional keyword may be used to enable - the sequence to wrap around when the - maxvalue or - minvalue has been - reached by - an ascending or descending sequence respectively. If the limit is - reached, the next number generated will be the - minvalue or - maxvalue, - respectively.
+ The optional CYCLE key word may be used to enable + the sequence to wrap around when the + maxvalue or + minvalue has been + reached by + an ascending or descending sequence, respectively. If the limit is + reached, the next number generated will be the + minvalue or + maxvalue, + respectively. - - NO CYCLE - - - If the optional keyword is specified, any - calls to nextval after the sequence has reached - its maximum value will return an error. If neither - or are specified, - the old cycle behaviour will be maintained. - - - + + NO CYCLE + + + If the optional NO CYCLE key word is + specified, any calls to nextval after the + sequence has reached its maximum value will return an error. + If neither CYCLE nor NO + CYCLE is specified, the old cycle behavior will be + maintained. + + + @@ -161,10 +164,10 @@ ALTER SEQUENCE name [ INCREMENT [ B Restart a sequence called serial, at 105: - - + ALTER SEQUENCE serial RESTART WITH 105; - + + diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index b558731137..099f0263f6 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -1,5 +1,5 @@ @@ -182,8 +182,8 @@ and table_constraint is: The LIKE clause specifies a table from which - the new table automatically inherits all column names, their datatypes, and - NOT NULL constraints. + the new table automatically inherits all column names, their data types, and + not-null constraints. Unlike INHERITS, the new table and inherited table diff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml index 00a158ce51..da670a182b 100644 --- a/doc/src/sgml/ref/create_type.sgml +++ b/doc/src/sgml/ref/create_type.sgml @@ -1,5 +1,5 @@ @@ -116,7 +116,7 @@ CREATE TYPE name ( representation. If this function is not supplied, the type cannot participate in binary input. The binary representation should be chosen to be cheap to convert to internal form, while being reasonably - portable.
(For example, the standard integer datatypes use network + portable. (For example, the standard integer data types use network byte order as the external binary representation, while the internal representation is in the machine's native byte order.) The receive function should perform adequate checking to ensure that the value is @@ -124,7 +124,7 @@ CREATE TYPE name ( The receive function may be declared as taking one argument of type internal, or two arguments of types internal and oid. It must return a value of the data type itself. - (The first argument is a pointer to a StringInfo buffer + (The first argument is a pointer to a StringInfo buffer holding the received byte string; the optional second argument is the element type in case this is an array type.) Similarly, the optional send_function converts diff --git a/doc/src/sgml/ref/create_view.sgml b/doc/src/sgml/ref/create_view.sgml index c8cb21ab15..1027413cf2 100644 --- a/doc/src/sgml/ref/create_view.sgml +++ b/doc/src/sgml/ref/create_view.sgml @@ -1,5 +1,5 @@ @@ -30,7 +30,7 @@ class="PARAMETER">column_name [, ...] ) ] AS column_name [, ...] ) ] AS MOVE/FETCH now returns the actual number of rows moved/fetched, or zero if at the beginning/end of the cursor - Prior releases would return the tuple count passed to the + Prior releases would return the row count passed to the command, not the actual number of rows FETCHed or MOVEd. @@ -1588,7 +1588,7 @@ of locale? Disable LIMIT #,# syntax; now only LIMIT # OFFSET # supported (Bruce) Increase identifier length to 63 (Neil, Bruce) UNION fixes for merging >= 3 columns of different lengths (Tom) -Add DEFAULT keyword to INSERT, e.g., INSERT ... (..., DEFAULT, ...) (Rod) +Add DEFAULT key word to INSERT, e.g., INSERT ... (..., DEFAULT, ...) (Rod) Allow views to have default values using ALTER COLUMN ... 
SET DEFAULT (Neil) Fail on INSERTs with column lists that don't supply all column values, e.g., INSERT INTO tab (col1, col2) VALUES ('val1'); (Rod) Fix for join aliases (Tom) @@ -2739,7 +2739,7 @@ AIX, MSWIN, VAX, N32K fixes (Tom) Multibytes fixes (Tom) Unicode fixes (Tatsuo) Optimizer improvements (Tom) -Fix for whole tuples in functions (Tom) +Fix for whole rows in functions (Tom) Fix for pg_ctl and option strings with spaces (Peter E) ODBC fixes (Hiroshi) EXTRACT can now take string argument (Thomas) @@ -2890,7 +2890,7 @@ Modify pg_dump to better handle user-defined items in template1 (Philip) Allow LIMIT in VIEW (Tom) Require cursor FETCH to honor LIMIT (Tom) Allow PRIMARY/FOREIGN Key definitions on inherited columns (Stephan) -Allow ORDER BY, LIMIT in sub-selects (Tom) +Allow ORDER BY, LIMIT in subqueries (Tom) Allow UNION in CREATE RULE (Tom) Make ALTER/DROP TABLE rollback-able (Vadim, Tom) Store initdb collation in pg_control so collation cannot be changed (Tom) @@ -2923,7 +2923,7 @@ Improved handling of file descriptor cache (Tom) New warning code about auto-created table alias entries (Bruce) Overhaul initdb process (Tom, Peter E) Overhaul of inherited tables; inherited tables now accessed by default; - new ONLY keyword prevents it (Chris Bitmead, Tom) + new ONLY key word prevents it (Chris Bitmead, Tom) ODBC cleanups/improvements (Nick Gorham, Stephan Szabo, Zoltan Kovacs, Michael Fork) Allow renaming of temp tables (Tom) @@ -2931,7 +2931,7 @@ Overhaul memory manager contexts (Tom) pg_dumpall uses CREATE USER or CREATE GROUP rather using COPY (Peter E) Overhaul pg_dump (Philip Warner) Allow pg_hba.conf secondary password file to specify only username (Peter E) -Allow TEMPORARY or TEMP keyword when creating temporary tables (Bruce) +Allow TEMPORARY or TEMP key word when creating temporary tables (Bruce) New memory leak checker (Karel) New SET SESSION CHARACTERISTICS (Thomas) Allow nested block comments (Thomas) @@ -3104,8 +3104,8 @@ Fix for mergejoin plans 
(Tom) Fix TRUNCATE failure on relations with indexes (Tom) Avoid database-wide restart on write error (Hiroshi) Fix nodeMaterial to honor chgParam by recomputing its output (Tom) -Fix VACUUM problem with moving chain of update tuples when source and - destination of a tuple lie on the same page (Tom) +Fix VACUUM problem with moving chain of update row versions when source + and destination of a row version lie on the same page (Tom) Fix user.c CommandCounterIncrement (Tom) Fix for AM/PM boundary problem in to_char() (Karel Zak) Fix TIME aggregate handling (Tom) @@ -3538,7 +3538,7 @@ Add btree indexing of boolean values, >= and <= (Don Baccus) Print current line number when COPY FROM fails (Massimo) Recognize POSIX time zone e.g. "PST+8" and "GMT-8" (Thomas) Add DEC as synonym for DECIMAL (Thomas) -Add SESSION_USER as SQL92 keyword, same as CURRENT_USER (Thomas) +Add SESSION_USER as SQL92 key word, same as CURRENT_USER (Thomas) Implement SQL92 column aliases (aka correlation names) (Thomas) Implement SQL92 join syntax (Thomas) Make INTERVAL reserved word allowed as a column identifier (Thomas) @@ -3822,7 +3822,7 @@ ACL file descriptor leak fix(Atsushi Ogawa) New expresssion subtree code(Tom) Avoid disk writes for read-only transactions(Vadim) Fix for removal of temp tables if last transaction was aborted(Bruce) -Fix to prevent too large tuple from being created(Bruce) +Fix to prevent too large row from being created(Bruce) plpgsql fixes Allow port numbers 32k - 64k(Bruce) Add ^ precidence(Bruce) @@ -4226,7 +4226,7 @@ Port to NetBSD/m68k(Mr. Mutsuki Nakajima) Port to NetBSD/sun3(Mr. 
Mutsuki Nakajima) Port to NetBSD/macppc(Toshimi Aoki) Fix for tcl/tk configuration(Vince) -Removed CURRENT keyword for rule queries(Jan) +Removed CURRENT key word for rule queries(Jan) NT dynamic loading now works(Daniel Horak) Add ARM32 support(Andrew McMurry) Better support for HP-UX 11 and UnixWare @@ -4589,7 +4589,7 @@ New INSERT INTO TABLE DEFAULT VALUES statement available(Thomas) New DECLARE and FETCH feature(Thomas) libpq's internal structures now not exported(Tom) Allow up to 8 key indexes(Bruce) -Remove ARCHIVE keyword, that is no longer used(Thomas) +Remove ARCHIVE key word, that is no longer used(Thomas) pg_dump -n flag to supress quotes around indentifiers disable system columns for views(Jan) new INET and CIDR types for network addresses(TomH, Paul) @@ -4777,7 +4777,7 @@ Fix for "Datasize = 0" error(Vadim) Prevent \do from wrapping(Bruce) Remove duplicate Russian character set entries Sunos4 cleanup -Allow optional TABLE keyword in LOCK and SELECT INTO(Thomas) +Allow optional TABLE key word in LOCK and SELECT INTO(Thomas) CREATE SEQUENCE options to allow a negative integer(Thomas) Add "PASSWORD" as an allowed column identifier(Thomas) Add checks for UNION target fields(Bruce) @@ -4971,7 +4971,7 @@ Fix for index scans in MergeJion(Vadim) Enhancements ------------ -Subselects with EXISTS, IN, ALL, ANY keywords (Vadim, Bruce, Thomas) +Subselects with EXISTS, IN, ALL, ANY key words (Vadim, Bruce, Thomas) New User Manual(Thomas, others) Speedup by inlining some frequently-called functions Real deadlock detection, no more timeouts(Bruce) @@ -5089,7 +5089,7 @@ Bring in the PostODBC source tree as part of our standard distribution(Marc) A minor patch for HP/UX 10 vs 9(Stan) New pg_attribute.atttypmod for type-specific info like varchar length(Bruce) UnixWare patches(Billy) -New i386 'lock' for spin lock asm(Billy) +New i386 'lock' for spinlock asm(Billy) Support for multiplexed backends is removed Start an OpenBSD port Start an AUX port @@ -5234,13 +5234,13 
@@ Check for geometric intersections at endpoints to avoid rounding ugliness(Thomas Catch non-functional delete attempts(Vadim) Change time function names to be more consistent(Michael Reifenberg) Check for zero divides(Michael Reifenberg) -Fix very old bug which made tuples changed/inserted by a commnd +Fix very old bug which made rows changed/inserted by a command visible to the command itself (so we had multiple update of - updated tuples, etc)(Vadim) + updated rows, etc.)(Vadim) Fix for SELECT null, 'fail' FROM pg_am (Patrick) SELECT NULL as EMPTY_FIELD now allowed(Patrick) Remove un-needed signal stuff from contrib/pginterface -Fix OR (where x != 1 or x isnull didn't return tuples with x NULL) (Vadim) +Fix OR (where x != 1 or x isnull didn't return rows with x NULL) (Vadim) Fix time_cmp function (Vadim) Fix handling of functions with non-attribute first argument in WHERE clauses (Vadim) @@ -5254,7 +5254,7 @@ Default genetic optimizer GEQO parameter is now 8(Bruce) Allow use parameters in target list having aggregates in functions(Vadim) Added JDBC driver as an interface(Adrian & Peter) pg_password utility -Return number of tuples inserted/affected by INSERT/UPDATE/DELETE etc.(Vadim) +Return number of rows inserted/affected by INSERT/UPDATE/DELETE etc.(Vadim) Triggers implemented with CREATE TRIGGER (SQL3)(Vadim) SPI (Server Programming Interface) allows execution of queries inside C-functions (Vadim) @@ -5481,7 +5481,7 @@ fix local buffers leak in transaction aborts (Vadim) fix file manager memmory leaks, cleanups (Vadim, Massimo) fix storage manager memmory leaks (Vadim) fix btree duplicates handling (Vadim) -fix deleted tuples re-incarnation caused by vacuum (Vadim) +fix deleted rows reincarnation caused by vacuum (Vadim) fix SELECT varchar()/char() INTO TABLE made zero-length fields(Bruce) many psql, pg_dump, and libpq memory leaks fixed using Purify (Igor) @@ -6026,7 +6026,7 @@ Enhancements: * added PQdisplayTuples() to libpq and changed monitor and psql 
to use it * added NeXT port (requires SysVIPC implementation) * added CAST .. AS ... syntax - * added ASC and DESC keywords + * added ASC and DESC key words * added 'internal' as a possible language for CREATE FUNCTION internal functions are C functions which have been statically linked into the postgres backend. @@ -6075,7 +6075,7 @@ Copyright change: Incompatibilities: * date formats have to be MM-DD-YYYY (or DD-MM-YYYY if you're using EUROPEAN STYLE). This follows SQL-92 specs. - * "delimiters" is now a keyword + * "delimiters" is now a key word Enhancements: * sql LIKE syntax has been added @@ -6160,10 +6160,10 @@ more compliant to the SQL-92 standard): (Also, aggregates can now be overloaded, i.e. you can define your own MAX aggregate to take in a user-defined type.) * CHANGE ACL removed. GRANT/REVOKE syntax added. - - Privileges can be given to a group using the "GROUP" keyword. + - Privileges can be given to a group using the "GROUP" key word. For example: GRANT SELECT ON foobar TO GROUP my_group; - The keyword 'PUBLIC' is also supported to mean all users. + The key word 'PUBLIC' is also supported to mean all users. Privileges can only be granted or revoked to one user or group at a time. @@ -6180,7 +6180,7 @@ Bug fixes: * the bug where aggregates of empty tables were not run has been fixed. Now, aggregates run on empty tables will return the initial conditions of the aggregates. Thus, COUNT of an empty table will now properly return 0. - MAX/MIN of an empty table will return a tuple of value NULL. + MAX/MIN of an empty table will return a row of value NULL. * allow the use of \; inside the monitor * the LISTEN/NOTIFY asynchronous notification mechanism now work * NOTIFY in rule action bodies now work @@ -6206,7 +6206,7 @@ Other changes and enhancements: libpgtcl changes: * The -oid option has been added to the "pg_result" tcl command. - pg_result -oid returns oid of the last tuple inserted. If the + pg_result -oid returns oid of the last row inserted. 
If the last command was not an INSERT, then pg_result -oid returns "". * the large object interface is available as pg_lo* tcl commands: pg_lo_open, pg_lo_close, pg_lo_creat, etc. diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml index 40e7103e54..c5fb952912 100644 --- a/doc/src/sgml/rules.sgml +++ b/doc/src/sgml/rules.sgml @@ -1,4 +1,4 @@ - + The Rule System @@ -770,7 +770,7 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; the stage. Old table rows aren't overwritten, and this is why ROLLBACK is fast. In an UPDATE, the new result row is inserted into the table (after stripping the - CTID) and in the tuple header of the old row, which the + CTID) and in the row header of the old row, which the CTID pointed to, the cmax and xmax entries are set to the current command counter and current transaction ID. Thus the old row is hidden, and after diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml index 10ac77f275..382e1856c5 100644 --- a/doc/src/sgml/runtime.sgml +++ b/doc/src/sgml/runtime.sgml @@ -1,5 +1,5 @@ @@ -878,9 +878,9 @@ SET ENABLE_SEQSCAN TO OFF; Specifies the maximum amount of memory to be used by VACUUM to keep track of to-be-reclaimed - tuples. The value is specified in kilobytes, and defaults to - 8192 kilobytes. Larger settings may improve the speed of - vacuuming large tables that have many deleted tuples. + rows. The value is specified in kilobytes, and defaults to + 8192 kB. Larger settings may improve the speed of + vacuuming large tables that have many deleted rows. @@ -1333,7 +1333,7 @@ SET ENABLE_SEQSCAN TO OFF; Sets the query planner's estimate of the cost of processing - each tuple during a query. This is measured as a fraction of + each row during a query. This is measured as a fraction of the cost of a sequential page fetch. The default is 0.01. @@ -1344,7 +1344,7 @@ SET ENABLE_SEQSCAN TO OFF; Sets the query planner's estimate of the cost of processing - each index tuple during an index scan. 
This is measured as a + each index row during an index scan. This is measured as a fraction of the cost of a sequential page fetch. The default is 0.001. diff --git a/doc/src/sgml/syntax.sgml b/doc/src/sgml/syntax.sgml index aa05367d16..0a31d1127c 100644 --- a/doc/src/sgml/syntax.sgml +++ b/doc/src/sgml/syntax.sgml @@ -1,5 +1,5 @@ @@ -1050,7 +1050,7 @@ CREATE FUNCTION dept(text) RETURNS dept where the operator token follows the syntax rules of , or is one of the - keywords AND, OR, and + key words AND, OR, and NOT, or is a qualified operator name OPERATOR(schema.operatorname) @@ -1292,7 +1292,7 @@ SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name) An array constructor is an expression that builds an array value from values for its member elements. A simple array constructor - consists of the keyword ARRAY, a left square bracket + consists of the key word ARRAY, a left square bracket [, one or more expressions (separated by commas) for the array element values, and finally a right square bracket ]. For example, @@ -1311,7 +1311,7 @@ SELECT ARRAY[1,2,3+4]; Multidimensional array values can be built by nesting array constructors. - In the inner constructors, the keyword ARRAY may + In the inner constructors, the key word ARRAY may be omitted. For example, these produce the same result: @@ -1352,7 +1352,7 @@ select ARRAY[f1, f2, '{{9,10},{11,12}}'::int[]] from arr; It is also possible to construct an array from the results of a subquery. In this form, the array constructor is written with the - keyword ARRAY followed by a parenthesized (not + key word ARRAY followed by a parenthesized (not bracketed) subquery. For example: SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%'); @@ -1361,9 +1361,9 @@ SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%'); {2011,1954,1948,1952,1951,1244,1950,2005,1949,1953,2006,31} (1 row) - The sub-select must return a single column. The + The subquery must return a single column. 
The resulting one-dimensional array will have an element for each row in the - sub-select result, with an element type matching that of the sub-select's + subquery result, with an element type matching that of the subquery's output column. diff --git a/doc/src/sgml/typeconv.sgml b/doc/src/sgml/typeconv.sgml index 59e88da29a..d98bb7d968 100644 --- a/doc/src/sgml/typeconv.sgml +++ b/doc/src/sgml/typeconv.sgml @@ -1,5 +1,5 @@ @@ -297,7 +297,7 @@ If only one candidate remains, use it; else continue to the next step. Run through all candidates and keep those that accept preferred types (of the -input datatype's type category) at the most positions where type conversion +input data type's type category) at the most positions where type conversion will be required. Keep all candidates if none accept preferred types. If only one candidate remains, use it; else continue to the next step. @@ -550,7 +550,7 @@ If only one candidate remains, use it; else continue to the next step. Run through all candidates and keep those that accept preferred types (of the -input datatype's type category) at the most positions where type conversion +input data type's type category) at the most positions where type conversion will be required. Keep all candidates if none accept preferred types. If only one candidate remains, use it; else continue to the next step. diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml index dc5c3bd4c2..0c1a7610df 100644 --- a/doc/src/sgml/xfunc.sgml +++ b/doc/src/sgml/xfunc.sgml @@ -1,5 +1,5 @@ @@ -2122,7 +2122,7 @@ CREATE OR REPLACE FUNCTION testpassbyval(integer, integer) RETURNS SETOF __testp of its arguments and the type it is expected to return. The routines are called get_fn_expr_rettype(FmgrInfo *flinfo) and get_fn_expr_argtype(FmgrInfo *flinfo, int argnum). - They return the result or argument type OID, or InvalidOid if the + They return the result or argument type OID, or InvalidOid if the information is not available. 
The structure flinfo is normally accessed as fcinfo->flinfo. The parameter argnum diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index aa9e39794e..85aba8abe7 100644 --- a/doc/src/sgml/xindex.sgml +++ b/doc/src/sgml/xindex.sgml @@ -1,5 +1,5 @@ @@ -431,7 +431,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/xindex.sgml,v 1.33 2003/10/21 23:28:42 tgl The least error-prone way to define a related set of comparison operators - is to write the btree comparison support function first, and then write the + is to write the B-tree comparison support function first, and then write the other functions as one-line wrappers around the support function. This reduces the odds of getting inconsistent results for corner cases. Following this approach, we first write @@ -600,15 +600,15 @@ CREATE OPERATOR CLASS complex_abs_ops PostgreSQL uses operator classes to infer the properties of operators in more ways than just whether they can be used with indexes. Therefore, you might want to create operator classes - even if you have no intention of indexing any columns of your datatype. + even if you have no intention of indexing any columns of your data type. In particular, there are SQL features such as ORDER BY and DISTINCT that require comparison and sorting of values. - To implement these features on a user-defined datatype, + To implement these features on a user-defined data type, PostgreSQL looks for the default B-tree operator - class for the datatype. The equals member of this operator + class for the data type. The equals member of this operator class defines the system's notion of equality of values for GROUP BY and DISTINCT, and the sort ordering imposed by the operator class defines the default ORDER BY @@ -621,16 +621,16 @@ CREATE OPERATOR CLASS complex_abs_ops - If there is no default B-tree operator class for a datatype, the system + If there is no default B-tree operator class for a data type, the system will look for a default hash operator class. 
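The default-operator-class lookup described in this xindex.sgml hunk can be sketched in SQL. This is a hypothetical continuation of the `complex_abs_ops` example from the same file; the operator strategy numbers follow the B-tree convention, but the comparison operators and `complex_abs_cmp` support function are assumed to exist already and are not part of this patch:

```sql
-- Sketch: making complex_abs_ops the default B-tree operator class for
-- type complex, so that ORDER BY, DISTINCT, and GROUP BY can be used
-- on complex values.
CREATE OPERATOR CLASS complex_abs_ops
    DEFAULT FOR TYPE complex USING btree AS
        OPERATOR        1       < ,
        OPERATOR        2       <= ,
        OPERATOR        3       = ,
        OPERATOR        4       >= ,
        OPERATOR        5       > ,
        FUNCTION        1       complex_abs_cmp(complex, complex);

-- Without a default operator class, a query like this would fail with
-- "could not identify an ordering operator":
SELECT DISTINCT c FROM t ORDER BY c;
```

The `DEFAULT` key word is what makes the class the one the system picks up implicitly for sorting and grouping, as the surrounding paragraph explains.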
But since that kind of operator class only provides equality, in practice it is only enough to support array equality. - When there is no default operator class for a datatype, you will get + When there is no default operator class for a data type, you will get errors like could not identify an ordering operator if you - try to use these SQL features with the datatype. + try to use these SQL features with the data type. diff --git a/doc/src/sgml/xoper.sgml b/doc/src/sgml/xoper.sgml index 39c49d4036..6e5c966bb5 100644 --- a/doc/src/sgml/xoper.sgml +++ b/doc/src/sgml/xoper.sgml @@ -1,5 +1,5 @@ @@ -128,8 +128,8 @@ SELECT (a + b) AS c FROM test_complex; a WHERE clause like tab1.x = tab2.y, where tab1.x and tab2.y are of a user-defined type, and suppose that tab2.y is indexed. The optimizer cannot generate an - indexscan unless it can determine how to flip the clause around to - tab2.y = tab1.x, because the indexscan machinery expects + index scan unless it can determine how to flip the clause around to + tab2.y = tab1.x, because the index-scan machinery expects to see the indexed column on the left of the operator it is given. PostgreSQL will not simply assume that this is a valid transformation --- the creator of the @@ -348,8 +348,8 @@ table1.column1 OP table2.column2 the operator, since of course the referencing operator class couldn't exist yet. But attempts to use the operator in hash joins will fail at runtime if no such operator class exists. The system needs the - operator class to find the datatype-specific hash function for the - operator's input datatype. Of course, you must also supply a suitable + operator class to find the data-type-specific hash function for the + operator's input data type. Of course, you must also supply a suitable hash function before you can create the operator class. 
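The hash-join requirements discussed in this xoper.sgml hunk can be illustrated with a short SQL sketch. The type and function names here are assumptions for illustration, not part of the patch; the point is the `HASHES` attribute and the semantic contract on the underlying function:

```sql
-- Sketch: declaring an equality operator as hash-joinable for a
-- hypothetical type mytype.  mytype_eq must be marked immutable (or
-- stable), and if it is strict it must return true or false, never
-- null, for any two nonnull inputs; otherwise hash-optimized IN can
-- give wrong results, as described above.
CREATE OPERATOR = (
    leftarg    = mytype,
    rightarg   = mytype,
    procedure  = mytype_eq,
    commutator = = ,
    hashes
);
```

As the text notes, the operator can be created before its hash operator class exists, but hash joins will fail at runtime until a suitable operator class (and its data-type-specific hash function) is in place.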
@@ -370,7 +370,7 @@ table1.column1 OP table2.column2 - The function underlying a hashjoinable operator must be marked + The function underlying a hash-joinable operator must be marked immutable or stable. If it is volatile, the system will never attempt to use the operator for a hash join. @@ -378,14 +378,14 @@ table1.column1 OP table2.column2 - If a hashjoinable operator has an underlying function that is marked + If a hash-joinable operator has an underlying function that is marked strict, the - function must also be complete: that is, it should return TRUE or - FALSE, never NULL, for any two non-NULL inputs. If this rule is + function must also be complete: that is, it should return true or + false, never null, for any two nonnull inputs. If this rule is not followed, hash-optimization of IN operations may generate wrong results. (Specifically, IN might return - FALSE where the correct answer per spec would be NULL; or it might - yield an error complaining that it wasn't prepared for a NULL result.) + false where the correct answer according to the standard would be null; or it might + yield an error complaining that it wasn't prepared for a null result.) @@ -488,7 +488,7 @@ table1.column1 OP table2.column2 - The function underlying a mergejoinable operator must be marked + The function underlying a merge-joinable operator must be marked immutable or stable. If it is volatile, the system will never attempt to use the operator for a merge join. diff --git a/doc/src/sgml/xtypes.sgml b/doc/src/sgml/xtypes.sgml index afd35ab700..bd2398e51e 100644 --- a/doc/src/sgml/xtypes.sgml +++ b/doc/src/sgml/xtypes.sgml @@ -1,5 +1,5 @@ @@ -131,7 +131,7 @@ complex_out(PG_FUNCTION_ARGS) Optionally, a user-defined type can provide binary input and output routines. Binary I/O is normally faster but less portable than textual I/O. As with textual I/O, it is up to you to define exactly what the - external binary representation is. 
Most of the built-in datatypes + external binary representation is. Most of the built-in data types try to provide a machine-independent binary representation. For complex, we will piggy-back on the binary I/O converters for type float8: @@ -241,7 +241,7 @@ CREATE TYPE complex ( the total length in bytes of the datum (including itself). The C functions operating on the data type must be careful to unpack any toasted values they are handed (this detail can normally be hidden in the - GETARG macros). Then, + GETARG macros). Then, when running the CREATE TYPE command, specify the internal length as variable and select the appropriate storage option.
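The binary I/O registration described in this xtypes.sgml hunk can be summarized with a SQL sketch following the complex example. `complex_recv` and `complex_send` are assumed to have been created already as C functions that piggy-back on the `float8` binary converters, as the text describes:

```sql
-- Sketch: registering textual and binary I/O functions for the
-- complex type from xtypes.sgml.
CREATE TYPE complex (
    internallength = 16,
    input          = complex_in,
    output         = complex_out,
    receive        = complex_recv,
    send           = complex_send,
    alignment      = double
);
```

For a variable-length type, per the TOAST discussion in the same section, one would instead specify `internallength = variable` together with an appropriate `storage` option (for example `extended`), and the C functions would need to detoast their arguments, normally via the `GETARG` macros.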