diff --git a/doc/src/sgml/arch-dev.sgml b/doc/src/sgml/arch-dev.sgml index 9e8483ad2e..447dcc5241 100644 --- a/doc/src/sgml/arch-dev.sgml +++ b/doc/src/sgml/arch-dev.sgml @@ -1,5 +1,5 @@ @@ -8,7 +8,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.19 2003/03/25 16:15:35 pe Author - This chapter originally appeared as a part of + This chapter originated as part of , Stefan Simkovics' Master's Thesis prepared at Vienna University of Technology under the direction of O.Univ.Prof.Dr. Georg Gottlob and Univ.Ass. Mag. Katrin Seyr. @@ -41,7 +41,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.19 2003/03/25 16:15:35 pe A connection from an application program to the PostgreSQL server has to be established. The application program transmits a - query to the server and receives the results sent back by the server. + query to the server and waits to receive the results sent back by the + server. @@ -49,7 +50,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.19 2003/03/25 16:15:35 pe The parser stage checks the query transmitted by the application - program (client) for correct syntax and creates + program for correct syntax and creates a query tree. @@ -60,9 +61,9 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.19 2003/03/25 16:15:35 pe the query tree created by the parser stage and looks for any rules (stored in the system catalogs) to apply to - the querytree and performs the + the query tree. It performs the transformations given in the rule bodies. - One application of the rewrite system is given in the realization of + One application of the rewrite system is in the realization of views. @@ -79,7 +80,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.19 2003/03/25 16:15:35 pe The planner/optimizer takes the (rewritten) querytree and creates a - queryplan that will be the input to the + query plan that will be the input to the executor. @@ -108,8 +109,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.19 2003/03/25 16:15:35 pe - In the following sections we will cover every of the above listed items - in more detail to give a better understanding on PostgreSQL's internal + In the following sections we will cover each of the above listed items + in more detail to give a better understanding of PostgreSQL's internal control and data structures. @@ -119,7 +120,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.19 2003/03/25 16:15:35 pe PostgreSQL is implemented using a - simple "process per-user" client/server model. In this model + simple process per user client/server model. In this model there is one client process connected to exactly one server process. As we do not know ahead of time how many connections will be made, we have to @@ -133,19 +134,15 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.19 2003/03/25 16:15:35 pe (postgres processes) communicate with each other using semaphores and shared memory to ensure data integrity - throughout concurrent data access. Figure \ref{connection} - illustrates the interaction of the master process - postmaster the server process - postgres and a client application. + throughout concurrent data access. - The client process can either be the psql frontend (for - interactive SQL queries) or any user application implemented using - the libpg library. Note that applications implemented using - ecpg - (the PostgreSQL embedded SQL preprocessor for C) - also use this library. + The client process can be any program that understands the + PostgreSQL protocol described in + . 
Many clients are based on the + C-language library libpq, but several independent + implementations exist, such as the Java JDBC driver. @@ -156,17 +153,6 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.19 2003/03/25 16:15:35 pe executes the plan and returns the retrieved tuples to the client by transmitting them over the established connection. - - - @@ -255,192 +241,47 @@ $Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.19 2003/03/25 16:15:35 pe understand what happens there. - - For a better understanding of the data structures used in - PostgreSQL - for the processing of a query we use an example to illustrate the - changes made to these data structures in every stage. - This example contains the following simple query that will be used in - various descriptions and figures throughout the following - sections. The query assumes that the tables given in - The Supplier Database - - have already been defined. - - - A Simple Select - - -select s.sname, se.pno - from supplier s, sells se - where s.sno > 2 and s.sno = se.sno; - - - - - - Figure \ref{parsetree} shows the parse tree built by the - grammar rules and actions given in gram.y for the query - given in - (without the operator tree for - the where clause which is shown in figure \ref{where_clause} - because there was not enough space to show both data structures in one - figure). - - - - The top node of the tree is a SelectStmt node. For every entry - appearing in the from clause of the SQL query a RangeVar - node is created holding the name of the alias and a pointer to a - RelExpr node holding the name of the relation. All - RangeVar nodes are collected in a list which is attached to the field - fromClause of the SelectStmt node. - - - - For every entry appearing in the select list of the SQL query a - ResTarget node is created holding a pointer to an Attr - node. The Attr node holds the relation name of the entry and - a pointer to a Value node holding the name of the - attribute. - All ResTarget nodes are collected to a list which is - connected to the field targetList of the SelectStmt node. - - - - Figure \ref{where_clause} shows the operator tree built for the - where clause of the SQL query given in - - which is attached to the field - qual of the SelectStmt node. The top node of the - operator tree is an A_Expr node representing an AND - operation. This node has two successors called lexpr and - rexpr pointing to two subtrees. The subtree attached to - lexpr represents the qualification s.sno > 2 and the one - attached to rexpr represents s.sno = se.sno. For every - attribute an Attr node is created holding the name of the - relation and a pointer to a Value node holding the name of the - attribute. For the constant term appearing in the query a - Const node is created holding the value. - - - - Transformation Process - The transformation process takes the tree handed back by - the parser as input and steps recursively through it. If - a SelectStmt node is found, it is transformed - to a Query - node that will be the top most node of the new data structure. Figure - \ref{transformed} shows the transformed data structure (the part - for the transformed where clause is given in figure - \ref{transformed_where} because there was not enough space to show all - parts in one figure). + The parser stage creates a parse tree using only fixed rules about + the syntactic structure of SQL. 
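To make this concrete, the parser stage accepts the following illustrative query (it reuses the supplier and sells example tables from the original thesis text; whether those tables actually exist is irrelevant at this point) purely on the basis of its syntax:

select s.sname, se.pno
  from supplier s, sells se
 where s.sno > 2 and s.sno = se.sno;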
It does not make any lookups in the + system catalogs, so there is no possibility to understand the detailed + semantics of the requested operations. After the parser completes, + the transformation process takes the tree handed + back by the parser as input and does the semantic interpretation needed + to understand which tables, functions, and operators are referenced by + the query. The data structure that is built to represent this + information is called the query tree. - Now a check is made, if the relation names in the - FROM clause are known to the system. For every relation name - that is present in the system catalogs a RTE node is - created containing the relation name, the alias name and - the relation id. From now on the relation ids are used to - refer to the relations given in the query. All RTE nodes - are collected in the range table entry list that is connected - to the field rtable of the Query node. If a name of a - relation that is not known to the system is detected in the query an - error will be returned and the query processing will be aborted. + The reason for separating raw parsing from semantic analysis is that + system catalog lookups can only be done within a transaction, and we + do not wish to start a transaction immediately upon receiving a query + string. The raw parsing stage is sufficient to identify the transaction + control commands (BEGIN, ROLLBACK, etc), and + these can then be correctly executed without any further analysis. + Once we know that we are dealing with an actual query (such as + SELECT or UPDATE), it is okay to + start a transaction if we're not already in one. Only then can the + transformation process be invoked. - Next it is checked if the attribute names used are - contained in the relations given in the query. For every - attribute} that is found a TLE node is created holding a pointer - to a Resdom node (which holds the name of the column) and a - pointer to a VAR node. There are two important numbers in the - VAR node. The field varno gives the position of the - relation containing the current attribute} in the range - table entry list created above. The field varattno gives the - position of the attribute within the relation. If the name - of an attribute cannot be found an error will be returned and - the query processing will be aborted. + The query tree created by the transformation process is structurally + similar to the raw parse tree in most places, but it has many differences + in detail. For example, a FuncCall node in the + parse tree represents something that looks syntactically like a function + call. This may be transformed to either a FuncExpr + or Aggref node depending on whether the referenced + name turns out to be an ordinary function or an aggregate function. + Also, information about the actual datatypes of columns and expression + results is added to the query tree. - - - @@ -461,8 +302,8 @@ current context are performed. implemented deep in the executor. The rule system was called whenever an individual tuple had been accessed. This implementation was removed in 1995 when the last official release - of the PostgreSQL project was transformed into - Postgres95. + of the Berkeley Postgres project was + transformed into Postgres95. @@ -470,7 +311,7 @@ current context are performed. The second implementation of the rule system is a technique called query rewriting. - The rewrite system} is a module + The rewrite system is a module that exists between the parser stage and the planner/optimizer. 
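As a brief sketch of what this means in practice, consider a view over the supplier, sells, and part example tables (this mirrors the test_view example that originally appeared in this section; a view is implemented internally by just such an ON SELECT rule, so a plain CREATE VIEW suffices for the illustration):

create view test_view as
    select s.sname, p.pname
      from supplier s, sells se, part p
     where s.sno = se.sno and p.pno = se.pno;

select sname
  from test_view
 where sname <> 'Smith';

-- Conceptually, the rewriter replaces the reference to test_view with the
-- view's defining query, so the planner sees something equivalent to:
--
--   select s.sname
--     from supplier s, sells se, part p
--    where s.sno = se.sno and p.pno = se.pno
--      and s.sname <> 'Smith';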
This technique is still implemented. @@ -479,111 +320,14 @@ current context are performed. - For information on the syntax and creation of rules in the - PostgreSQL system refer to . + The query rewriter is discussed in some detail in + , so there is no need to cover it here. + We will only point out that both the input and the output of the + rewriter are query trees, that is, there is no change in the + representation or level of semantic detail in the trees. Rewriting + can be thought of as a form of macro expansion. - - The Rewrite System - - - The query rewrite system is a module between - the parser stage and the planner/optimizer. It processes the tree handed - back by the parser stage (which represents a user query) and if - there is a rule present that has to be applied to the query it - rewrites the tree to an alternate form. - - - - Techniques To Implement Views - - - Now we will sketch the algorithm of the query rewrite system. For - better illustration we show how to implement views using rules - as an example. - - - - Let the following rule be given: - - - create rule view_rule - as on select - to test_view - do instead - select s.sname, p.pname - from supplier s, sells se, part p - where s.sno = se.sno and - p.pno = se.pno; - - - - - The given rule will be fired whenever a select - against the relation test_view is detected. Instead of - selecting the tuples from test_view the select statement - given in the action part of the rule is executed. - - - - Let the following user-query against test_view be given: - - - select sname - from test_view - where sname <> 'Smith'; - - - - - Here is a list of the steps performed by the query rewrite - system whenever a user-query against test_view appears. (The - following listing is a very informal description of the algorithm just - intended for basic understanding. For a detailed description refer - to ). - - - - <literal>test_view</literal> Rewrite - - - Take the query given in the action part of the rule. - - - - - - Adapt the targetlist to meet the number and order of - attributes given in the user-query. - - - - - - Add the qualification given in the where clause of the - user-query to the qualification of the query given in the - action part of the rule. - - - - - - Given the rule definition above, the user-query will be - rewritten to the following form (Note that the rewriting is done on - the internal representation of the user-query handed back by the - parser stage but the derived new data structure will represent the following - query): - - - select s.sname - from supplier s, sells se, part p - where s.sno = se.sno and - p.pno = se.pno and - s.sname <> 'Smith'; - - - - @@ -591,7 +335,7 @@ current context are performed. The task of the planner/optimizer is to create an optimal - execution plan. It first combines all possible ways of + execution plan. It first considers all possible ways of scanning and joining the relations that appear in a query. All the created paths lead to the same result and it's the @@ -599,6 +343,12 @@ current context are performed. find out which one is the cheapest. + + After the cheapest path is determined, a plan tree + is built to pass to the executor. This represents the desired + execution plan in sufficient detail for the executor to run it. + + Generating Possible Plans @@ -612,28 +362,34 @@ current context are performed. restriction relation.attribute OPR constant. 
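For instance, continuing the illustrative supplier example (the index name below is made up), a B-tree index on supplier.sno gives the planner such an extra option:

create index supplier_sno_idx on supplier (sno);

-- The planner will notice that a qualification such as
--     where s.sno > 2
-- matches this index's key, so an index scan of supplier can be
-- considered in addition to the sequential scan.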
If relation.attribute happens to match the key of the B-tree - index and OPR is anything but '<>' another plan is created using + index and OPR is one of the operators listed in + the index's operator class, another plan is created using the B-tree index to scan the relation. If there are further indexes present and the restrictions in the query happen to match a key of an index further plans will be considered. - After all feasible plans have been found for scanning single - relations, plans for joining relations are created. The - planner/optimizer considers only joins between every two relations for - which there exists a corresponding join clause (i.e. for which a - restriction like where rel1.attr1=rel2.attr2 exists) in the - where qualification. All possible plans are generated for every - join pair considered by the planner/optimizer. The three possible join - strategies are: + After all feasible plans have been found for scanning single relations, + plans for joining relations are created. The planner/optimizer + preferentially considers joins between any two relations for which there + exists a corresponding join clause in the WHERE qualification (i.e. for + which a restriction like where rel1.attr1=rel2.attr2 + exists). Join pairs with no join clause are considered only when there + is no other choice, that is, a particular relation has no available + join clauses to any other relation. All possible plans are generated for + every join pair considered + by the planner/optimizer. The three possible join strategies are: - nested iteration join: The right relation is scanned + nested loop join: The right relation is scanned once for every tuple found in the left relation. This strategy - is easy to implement but can be very time consuming. + is easy to implement but can be very time consuming. (However, + if the right relation can be scanned with an indexscan, this can + be a good strategy. It is possible to use values from the current + row of the left relation as keys for the indexscan of the right.) @@ -643,71 +399,34 @@ current context are performed. attributes before the join starts. Then the two relations are merged together taking into account that both relations are ordered on the join attributes. This kind of join is more - attractive because every relation has to be scanned only once. + attractive because each relation has to be scanned only once. - hash join: the right relation is first hashed on its - join attributes. Next the left relation is scanned and the + hash join: the right relation is first scanned + and loaded into a hash table, using its join attributes as hash keys. + Next the left relation is scanned and the appropriate values of every tuple found are used as hash keys to - locate the tuples in the right relation. + locate the matching tuples in the table. - - - - Data Structure of the Plan - Here we will give a little description of the nodes appearing in the - plan. Figure \ref{plan} shows the plan produced for the query in - example \ref{simple_select}. - - - - The top node of the plan is a MergeJoin node that has two - successors, one attached to the field lefttree and the second - attached to the field righttree. Each of the subnodes represents - one relation of the join. As mentioned above a merge sort - join requires each relation to be sorted. That's why we find - a Sort node in each subplan.
The additional qualification given - in the query (s.sno > 2) is pushed down as far as possible and is - attached to the qpqual field of the leaf SeqScan node of - the corresponding subplan. - - - - The list attached to the field mergeclauses of the - MergeJoin node contains information about the join attributes. - The values 65000 and 65001 - for the varno fields in the - VAR nodes appearing in the mergeclauses list (and also in the - targetlist) mean that not the tuples of the current node should be - considered but the tuples of the next deeper nodes (i.e. the top - nodes of the subplans) should be used instead. - - - - Note that every Sort and SeqScan node appearing in figure - \ref{plan} has got a targetlist but because there was not enough space - only the one for the MergeJoin node could be drawn. - - - - Another task performed by the planner/optimizer is fixing the - operator ids in the Expr - and Oper nodes. As - mentioned earlier, PostgreSQL supports a variety of different data - types and even user defined types can be used. To be able to maintain - the huge amount of functions and operators it is necessary to store - them in a system table. Each function and operator gets a unique - operator id. According to the types of the attributes used - within the qualifications etc., the appropriate operator ids - have to be used. + The finished plan tree consists of sequential or index scans of the + base relations, plus nestloop, merge, or hash join nodes as needed, + plus any auxiliary steps needed, such as sort nodes or aggregate-function + calculation nodes. Most of these plan node types have the additional + ability to do selection (discarding rows that do + not meet a specified boolean condition) and projection + (computation of a derived column set based on given column values, + that is, evaluation of scalar expressions where needed). One of + the responsibilities of the planner is to attach selection conditions + from the WHERE clause and computation of required output expressions + to the most appropriate nodes of the plan tree. @@ -717,3365 +436,74 @@ current context are performed. The executor takes the plan handed back by the - planner/optimizer and starts processing the top node. In the case of - our example (the query given in example \ref{simple_select}) the top - node is a MergeJoin node. + planner/optimizer and recursively processes it to extract the required set + of rows. This is essentially a demand-driven pipeline mechanism. + Each time a plan node is called, it must deliver one more tuple, or + report that it is done delivering tuples. + To provide a concrete example, assume that the top + node is a MergeJoin node. Before any merge can be done two tuples have to be fetched (one from each subplan). So the executor recursively calls itself to process the subplans (it starts with the subplan attached to - lefttree). The new top node (the top node of the left subplan) is a - SeqScan node and again a tuple has to be fetched before the node - itself can be processed. The executor calls itself recursively - another time for the subplan attached to lefttree of the - SeqScan node. + lefttree). The new top node (the top node of the left + subplan) is, let's say, a + Sort node and again recursion is needed to obtain + an input tuple. The child node of the Sort might + be a SeqScan node, representing actual reading of a table. + Execution of this node causes the executor to fetch a row from the + table and return it up to the calling node. 
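To visualize the kind of plan tree being walked through here, a merge-join plan for the illustrative supplier/sells query could be displayed by EXPLAIN roughly as follows (schematic only: cost estimates are omitted, and on real data the planner may well choose a different strategy):

explain select s.sname, se.pno
          from supplier s, sells se
         where s.sno > 2 and s.sno = se.sno;

--  Merge Join
--    Merge Cond: (s.sno = se.sno)
--    ->  Sort
--          Sort Key: s.sno
--          ->  Seq Scan on supplier s
--                Filter: (sno > 2)
--    ->  Sort
--          Sort Key: se.sno
--          ->  Seq Scan on sells se

Note how the selection condition sno > 2 has been attached to the scan of supplier, as described above.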
The Sort + node will repeatedly call its child to obtain all the rows to be sorted. + When the input is exhausted (as indicated by the child node returning + a NULL instead of a tuple), the Sort code performs + the sort, and finally is able to return its first output row, namely + the first one in sorted order. It keeps the remaining rows stored so + that it can deliver them in sorted order in response to later demands. - Now the new top node is a Sort node. As a sort has to be done on - the whole relation, the executor starts fetching tuples - from the Sort node's subplan and sorts them into a temporary - relation (in memory or a file) when the Sort node is visited for - the first time. (Further examinations of the Sort node will - always return just one tuple from the sorted temporary - relation.) + The MergeJoin node similarly demands the first row + from its right subplan. Then it compares the two rows to see if they + can be joined; if so, it returns a join row to its caller. On the next + call, or immediately if it cannot join the current pair of inputs, + it advances to the next row of one table + or the other (depending on how the comparison came out), and again + checks for a match. Eventually, one subplan or the other is exhausted, + and the MergeJoin node returns NULL to indicate that + no more join rows can be formed. - Every time the processing of the Sort node needs a new tuple the - executor is recursively called for the SeqScan node - attached as subplan. The relation (internally referenced by the - value given in the scanrelid field) is scanned for the next - tuple. If the tuple satisfies the qualification given by the tree - attached to qpqual it is handed back, otherwise the next tuple - is fetched until the qualification is satisfied. If the last tuple of - the relation has been processed a NULL pointer is - returned. + Complex queries may involve many levels of plan nodes, but the general + approach is the same: each node computes and returns its next output + row each time it is called. Each node is also responsible for applying + any selection or projection expressions that were assigned to it by + the planner. - After a tuple has been handed back by the lefttree of the - MergeJoin the righttree is processed in the same way. If both - tuples are present the executor processes the MergeJoin - node. Whenever a new tuple from one of the subplans is needed a - recursive call to the executor is performed to obtain it. If a - joined tuple could be created it is handed back and one complete - processing of the plan tree has finished. - - - - Now the described steps are performed once for every tuple, until a - NULL pointer is returned for the processing of the - MergeJoin node, indicating that we are finished. + The executor mechanism is used to evaluate all four basic SQL query types: + SELECT, INSERT, UPDATE, and + DELETE. For SELECT, the top-level executor + code only needs to send each row returned by the query plan tree off + to the client. For INSERT, each returned row is inserted + into the target table specified for the INSERT. (A simple + INSERT ... VALUES command creates a trivial plan tree + consisting of a single Result node, which computes just one + result row. But INSERT ... SELECT may demand the full power + of the executor mechanism.) 
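As a small illustration (supplier_copy is a made-up table with the same columns as supplier):

-- Needs only a trivial plan: a single Result node computes the one new row.
insert into supplier_copy (sno, sname)
    values (10, 'Jones');

-- Runs a full plan, here a scan of supplier with a filter; every row the
-- plan returns is inserted into supplier_copy.
insert into supplier_copy
    select * from supplier where sno > 2;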
For UPDATE, the planner arranges + that each computed row includes all the updated column values, plus + the TID (tuple ID, or location) of the original target row; + the executor top level uses this information to create a new updated row + and mark the old row deleted. For DELETE, the only column + that is actually returned by the plan is the TID, and the executor top + level simply uses the TID to visit the target rows and mark them deleted. - -
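The tuple ID discussed here is the same row locator that is exposed at the SQL level as the hidden system column ctid, so its role can be observed directly (illustrative, reusing the supplier example; the values returned by such a query are physical row locations and will differ from system to system):

select ctid, sname
  from supplier
 where sno > 2;

-- Because an UPDATE stores a new row version rather than overwriting the
-- old one in place, the updated row will afterwards have a different ctid.
update supplier set sname = 'Smith' where sno = 3;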