
PostgreSQL TODO List
====================
Current maintainer: Bruce Momjian (bruce@momjian.us)
Last updated: Tue Dec 19 16:57:04 EST 2006
The most recent version of this document can be viewed at
http://www.postgresql.org/docs/faqs.TODO.html.
A hyphen, "-", marks changes that will appear in the upcoming 8.3 release.
A percent sign, "%", marks items that are easier to implement.
Bracketed items, "[]", have more detail.
This list contains all known PostgreSQL bugs and feature requests. If
you would like to work on an item, please read the Developer's FAQ
first.
Administration
==============
* Allow major upgrades without dump/reload, perhaps using pg_upgrade
[pg_upgrade]
* Check for unreferenced table files created by transactions that were
in-progress when the server terminated abruptly
http://archives.postgresql.org/pgsql-patches/2006-06/msg00096.php
* Allow administrators to safely terminate individual sessions either
via an SQL function or SIGTERM
Lock table corruption following SIGTERM of an individual backend
has been reported in 8.0. A possible cause was fixed in 8.1, but
it is unknown whether other problems exist. This item mostly
requires additional testing rather than writing any new code.
http://archives.postgresql.org/pgsql-hackers/2006-08/msg00174.php
* %Set proper permissions on non-system schemas during db creation
Currently all schemas are owned by the super-user because they are
copied from the template1 database.
* Support table partitioning that allows a single table to be stored
in subtables that are partitioned based on the primary key or a WHERE
clause
* Add function to report the time of the most recent server reload
* Allow statistics collector information to be pulled from the collector
process directly, rather than requiring the collector to write a
filesystem file twice a second?
* Allow log_min_messages to be specified on a per-module basis
This would allow administrators to see more detailed information from
specific sections of the backend, e.g. checkpoints, autovacuum, etc.
Another idea is to allow separate configuration files for each module,
or allow arbitrary SET commands to be passed to them.
* Simplify ability to create partitioned tables
This would allow creation of partitioned tables without requiring
creation of rules for INSERT/UPDATE/DELETE, and constraints for
rapid partition selection. Options could include range and hash
partition selection.
* Allow auto-selection of partitioned tables for min/max() operations
* Allow more complex user/database default GUC settings
Currently, ALTER USER and ALTER DATABASE support per-user and
per-database defaults. Consider adding per-user-and-database
defaults so things like search_path can be defaulted for a
specific user connecting to a specific database.
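A hypothetical sketch of such a setting (the IN DATABASE clause does not
exist and is shown only for illustration; the user and schema names are
made up):
    ALTER USER appuser IN DATABASE sales SET search_path = sales_schema, public;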
* Improve replication solutions
o Load balancing
You can use any of the master/slave replication solutions to maintain a
standby server for data warehousing. To allow read/write queries to
multiple servers, you need multi-master replication like pgcluster.
o Allow replication over unreliable or non-persistent links
* Configuration files
o Allow commenting of variables in postgresql.conf to restore them
to defaults
Currently, if a variable is commented out, it keeps the
previous uncommented value until the server is restarted.
http://archives.postgresql.org/pgsql-hackers/2006-09/msg01481.php
o Allow pg_hba.conf to specify host names along with IP addresses
Host name lookup could occur when the postmaster reads the
pg_hba.conf file, or when the backend starts. Another
solution would be to reverse lookup the connection IP and
check that hostname against the host names in pg_hba.conf.
We could also then check that the host name maps to the IP
address.
o %Allow postgresql.conf file values to be changed via an SQL
API, perhaps using SET GLOBAL
o Allow the server to be stopped/restarted via an SQL API
o Issue a warning if a change-on-restart-only postgresql.conf value
is modified and the server config files are reloaded
o Mark change-on-restart-only values in postgresql.conf
* Tablespaces
o Allow a database in tablespace t1 with tables created in
tablespace t2 to be used as a template for a new database created
with default tablespace t2
All objects in the default database tablespace must have default
tablespace specifications. This is because new databases are
created by copying directories. If you mix default tablespace
tables and tablespace-specified tables in the same directory,
creating a new database from such a mixed directory would create a
new database with tables that had incorrect explicit tablespaces.
To fix this would require modifying pg_class in the newly copied
database, which we don't currently do.
o Allow reporting of which objects are in which tablespaces
This item is difficult because a tablespace can contain objects
from multiple databases. There is a server-side function that
returns the databases which use a specific tablespace, so this
requires a tool that will call that function and connect to each
database to find the objects in each database for that tablespace.
o %Add a GUC variable to control the tablespace for temporary objects
and sort files
It could start with a random tablespace from a supplied list and
cycle through the list.
o Allow WAL replay of CREATE TABLESPACE to work when the directory
structure on the recovery computer is different from the original
o Allow per-tablespace quotas
* Point-In-Time Recovery (PITR)
o Allow a warm standby system to also allow read-only statements
[pitr]
This is useful for checking PITR recovery.
o %Create dump tool for write-ahead logs for use in determining
transaction id for point-in-time recovery
o Allow the PITR process to be debugged and data examined
Monitoring
==========
* Allow server log information to be output as INSERT statements
This would allow server log information to be easily loaded into
a database for analysis.
* %Add ability to monitor the use of temporary sort files
Data Types
==========
* Improve the MONEY data type
Change the MONEY data type to use DECIMAL internally, with special
locale-aware output formatting.
http://archives.postgresql.org/pgsql-general/2005-08/msg01432.php
http://archives.postgresql.org/pgsql-hackers/2006-09/msg01107.php
* Change NUMERIC to enforce the maximum precision
* Add NUMERIC division operator that doesn't round?
Currently NUMERIC _rounds_ the result to the specified precision.
This means division can return a result that multiplied by the
divisor is greater than the dividend, e.g. this returns a value > 10:
SELECT (10::numeric(2,0) / 6::numeric(2,0))::numeric(2,0) * 6;
The positive modulus result returned by NUMERICs might be considered
inaccurate, in one sense.
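A workaround today is to truncate explicitly instead of letting the cast
round, e.g.:
    SELECT trunc(10::numeric / 6::numeric, 0) * 6;  -- 6, never exceeds the dividend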
* Fix data types where equality comparison isn't intuitive, e.g. box
* Allow user-defined types to specify a type modifier at table creation
time
* Allow user-defined types to accept 'typmod' parameters
http://archives.postgresql.org/pgsql-hackers/2005-08/msg01142.php
http://archives.postgresql.org/pgsql-hackers/2005-09/msg00012.php
http://archives.postgresql.org/pgsql-hackers/2006-08/msg00149.php
* Add support for public SYNONYMs
http://archives.postgresql.org/pgsql-hackers/2006-03/msg00519.php
* Fix CREATE CAST on DOMAINs
http://archives.postgresql.org/pgsql-hackers/2006-05/msg00072.php
http://archives.postgresql.org/pgsql-hackers/2006-09/msg01681.php
* Add Globally/Universally Unique Identifier (GUID/UUID)
http://archives.postgresql.org/pgsql-patches/2006-09/msg00209.php
* Add support for SQL-standard GENERATED/IDENTITY columns
http://archives.postgresql.org/pgsql-hackers/2006-07/msg00543.php
* Support a data type with specific enumerated values (ENUM)
http://archives.postgresql.org/pgsql-hackers/2006-08/msg00979.php
* Improve XML support
http://developer.postgresql.org/index.php/XML_Support
* Dates and Times
o Allow infinite dates and intervals just like infinite timestamps
o Merge hardwired timezone names with the TZ database; allow either
kind everywhere a TZ name is currently taken
o Allow TIMESTAMP WITH TIME ZONE to store the original timezone
information, either zone name or offset from UTC [timezone]
If the TIMESTAMP value is stored with a time zone name, interval
computations should adjust based on the time zone rules.
o Fix SELECT '0.01 years'::interval, '0.01 months'::interval
o Add a GUC variable to allow output of interval values in ISO8601
format
o Improve timestamptz subtraction to be DST-aware
Currently, subtracting one date from another that crosses a
daylight savings time adjustment can return '1 day 1 hour', but
adding that back to the first date returns a time one hour in
the future. This is caused by the adjustment of '25 hours' to
'1 day 1 hour', and '1 day' is the same time the next day, even
if daylight savings adjustments are involved.
o Fix interval display to support values exceeding 2^31 hours
o Add overflow checking to timestamp and interval arithmetic
o Add ISO INTERVAL handling
http://archives.postgresql.org/pgsql-hackers/2006-01/msg00250.php
http://archives.postgresql.org/pgsql-bugs/2006-04/msg00248.php
o Support ISO INTERVAL syntax if units cannot be determined from
the string, and are supplied after the string
The SQL standard states that the units after the string
specify the units of the string, e.g. INTERVAL '2' MINUTE
should return '00:02:00'. The current behavior has the units
restrict the interval value to the specified unit or unit
range, e.g. INTERVAL '70' SECOND returns '00:00:10'.
For syntax that isn't uniquely ISO or PG syntax, like '1' or
'1:30', treat as ISO if there is a range specification clause,
and as PG if no clause is present, e.g. interpret '1:30'
MINUTE TO SECOND as '1 minute 30 seconds', and interpret
'1:30' as '1 hour, 30 minutes'.
This makes common cases like SELECT INTERVAL '1' MONTH
return SQL-standard results. The SQL standard supports a limited
number of unit combinations and doesn't support unit names in
the string. The PostgreSQL syntax is more flexible in the
range of units supported, e.g. PostgreSQL supports '1 year 1
hour', while the SQL standard does not.
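A sketch of the interpretations proposed above (results shown as
comments; the first two reflect the proposed behavior, not the current
one):
    SELECT INTERVAL '2' MINUTE;               -- '00:02:00'
    SELECT INTERVAL '1:30' MINUTE TO SECOND;  -- '00:01:30' (range clause, ISO)
    SELECT INTERVAL '1:30';                   -- '01:30:00' (no clause, PG syntax)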
o Add support for year-month syntax, INTERVAL '50-6' YEAR TO MONTH
o Interpret INTERVAL '1 year' MONTH as CAST (INTERVAL '1 year' AS
INTERVAL MONTH), and this should return '12 months'
o Round or truncate values to the requested precision, e.g.
INTERVAL '11 months' AS YEAR should return one or zero
o Support precision, CREATE TABLE foo (a INTERVAL MONTH(3))
* Arrays
o Delay resolution of array expression's data type so assignment
coercion can be performed on empty array expressions
o Add support for arrays of domains
o Add support for arrays of complex types
* Binary Data
o Improve vacuum of large objects, like /contrib/vacuumlo?
o Add security checking for large objects
o Auto-delete large objects when referencing row is deleted
/contrib/lo offers this functionality.
o Allow read/write into TOAST values like large objects
This requires the TOAST column to be stored EXTERNAL.
o Add API for 64-bit large object access
http://archives.postgresql.org/pgsql-hackers/2005-09/msg00781.php
Functions
=========
* Allow INET subnet tests using non-constants to be indexed
* %Add pg_get_acldef(), pg_get_typedefault(), pg_get_attrdef(),
pg_get_tabledef(), pg_get_domaindef(), pg_get_functiondef()
These would be for application use, not for use by pg_dump.
* Allow to_date() and to_timestamp() to accept localized month names
* Add missing parameter handling in to_char()
http://archives.postgresql.org/pgsql-hackers/2005-12/msg00948.php
* Allow functions to have a schema search path specified at creation time
* Allow substring/replace() to get/set bit values
* Allow to_char() on interval values to accumulate the highest unit
requested
Some special format flag would be required to request such
accumulation. Such functionality could also be added to EXTRACT.
Prevent accumulation that crosses the month/day boundary because of
the uneven number of days in a month.
o to_char(INTERVAL '1 hour 5 minutes', 'MI') => 65
o to_char(INTERVAL '43 hours 20 minutes', 'MI' ) => 2600
o to_char(INTERVAL '43 hours 20 minutes', 'WK:DD:HR:MI') => 0:1:19:20
o to_char(INTERVAL '3 years 5 months','MM') => 41
* Add ISO day of week format 'ID' to to_char() where Monday = 1
* Add a field 'isoyear' to extract(), based on the ISO week
* Add SPI_gettypmod() to return the typemod for a TupleDesc
* Allow inlining of set-returning functions
* Allow SQL-language functions to return results from RETURNING queries
Multi-Language Support
======================
* Add NCHAR (as distinguished from ordinary varchar)
* Allow locale to be set at database creation
Currently locale can only be set during initdb. No global tables have
locale-aware columns. However, the database template used during
database creation might have locale-aware indexes. The indexes would
need to be reindexed to match the new locale.
* Allow encoding on a per-column basis optionally using the ICU library:
Right now only one encoding is allowed per database. [locale]
http://archives.postgresql.org/pgsql-hackers/2005-03/msg00932.php
http://archives.postgresql.org/pgsql-patches/2005-08/msg00309.php
http://archives.postgresql.org/pgsql-patches/2006-03/msg00233.php
http://archives.postgresql.org/pgsql-hackers/2006-09/msg00662.php
* Add CREATE COLLATE? [locale]
* Support multiple simultaneous character sets, per SQL92
* Improve UTF8 combined character handling?
* Add octet_length_server() and octet_length_client()
* Make octet_length_client() the same as octet_length()?
* Fix problems with wrong runtime encoding conversion for NLS message files
* Add URL to more complete multi-byte regression tests
http://archives.postgresql.org/pgsql-hackers/2005-07/msg00272.php
* Fix ILIKE and regular expressions to handle case insensitivity
properly in multibyte encodings
http://archives.postgresql.org/pgsql-bugs/2005-10/msg00001.php
http://archives.postgresql.org/pgsql-patches/2005-11/msg00173.php
* Set client encoding based on the client operating system encoding
Currently client_encoding is set in postgresql.conf, which
defaults to the server encoding.
http://archives.postgresql.org/pgsql-hackers/2006-08/msg01696.php
Views / Rules
=============
* Automatically create rules on views so they are updateable, per SQL99
We can only auto-create rules for simple views. For more complex
cases users will still have to write rules manually.
http://archives.postgresql.org/pgsql-hackers/2006-03/msg00586.php
* Add support for the WITH CHECK OPTION clause of CREATE VIEW
* Allow NOTIFY in rules involving conditionals
* Allow VIEW/RULE recompilation when the underlying tables change
Another issue is whether underlying table changes should be reflected
in the view, e.g. should SELECT * show additional columns if they
are added after the view is created.
SQL Commands
============
* Add CORRESPONDING BY to UNION/INTERSECT/EXCEPT
* Add ROLLUP, CUBE, GROUPING SETS options to GROUP BY
* %Allow SET CONSTRAINTS to be qualified by schema/table name
* %Add a separate TRUNCATE permission
Currently only the owner can TRUNCATE a table because triggers are not
called, and the table is locked in exclusive mode.
* Allow PREPARE of cursors
* Allow finer control over the caching of prepared query plans
Currently, queries prepared via the libpq API are planned on first
execute using the supplied parameters --- allow SQL PREPARE to do the
same. Also, allow control over replanning prepared queries either
manually or automatically when statistics for execute parameters
differ dramatically from those used during planning.
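For reference, a statement prepared through SQL today is planned without
knowledge of the parameter values later supplied to EXECUTE (table and
parameter names below are illustrative only):
    PREPARE get_orders (int) AS
        SELECT * FROM orders WHERE customer_id = $1;
    EXECUTE get_orders(42);   -- runs the generic plan built at PREPARE time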
* Invalidate prepared queries, like INSERT, when the table definition
is altered
* Allow LISTEN/NOTIFY to store info in memory rather than tables?
Currently LISTEN/NOTIFY information is stored in pg_listener. Storing
such information in memory would improve performance.
* Add optional textual message to NOTIFY
This would allow an informational message to be added to the notify
message, perhaps indicating the row modified or other custom
information.
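A hypothetical example of such a payload (the second argument to NOTIFY
does not exist today):
    LISTEN orders_changed;
    NOTIFY orders_changed, 'order 42 updated';   -- hypothetical payload syntax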
* Add a GUC variable to warn about non-standard SQL usage in queries
* Add SQL-standard MERGE command, typically used to merge two tables
[merge]
This is similar to UPDATE, then for unmatched rows, INSERT.
Whether concurrent access allows modifications which could cause
row loss is implementation independent.
* Add REPLACE or UPSERT command that does UPDATE, or on failure, INSERT
[merge]
To implement this cleanly requires that the table have a unique index
so duplicate checking can be easily performed. It is possible to
do it without a unique index if we require the user to LOCK the table
before the MERGE.
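Until such a command exists, one workaround is to block concurrent
writers and try UPDATE before INSERT; a minimal sketch, assuming a table
accounts(id, balance):
    BEGIN;
    LOCK TABLE accounts IN SHARE ROW EXCLUSIVE MODE;
    UPDATE accounts SET balance = balance + 100 WHERE id = 1;
    INSERT INTO accounts (id, balance)
        SELECT 1, 100 WHERE NOT EXISTS (SELECT 1 FROM accounts WHERE id = 1);
    COMMIT;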
* Add NOVICE output level for helpful messages like automatic sequence/index
creation
* Add RESET CONNECTION command to reset all session state
This would include resetting of all variables (RESET ALL), dropping of
temporary tables, removing any NOTIFYs, cursors, open transactions,
prepared queries, currval()s, etc. This could be used for connection
pooling. We could also change RESET ALL to have this functionality.
The difficulty of this feature is allowing RESET ALL to not affect
changes made by the interface driver for its internal use. One idea
is for this to be a protocol-only feature. Another approach is to
notify the protocol when a RESET CONNECTION command is used.
http://archives.postgresql.org/pgsql-patches/2006-04/msg00192.php
* Add GUC to issue notice about statements that use unjoined tables
* Allow EXPLAIN to identify tables that were skipped because of
constraint_exclusion
* Allow EXPLAIN output to be more easily processed by scripts
* Enable standard_conforming_strings
* Make standard_conforming_strings the default in 8.3?
When this is done, backslash-quote should be prohibited in non-E''
strings because of possible confusion over how such strings treat
backslashes. Basically, '' is always safe for a literal single
quote, while \' might or might not be based on the backslash
handling rules.
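For example, with standard-conforming strings enabled:
    SELECT 'It''s always safe';   -- doubled quote works in either mode
    SELECT E'It\'s explicit';     -- backslash escapes only inside E'' strings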
* Simplify dropping roles that have objects in several databases
* Allow COMMENT ON to accept an expression rather than just a string
* Allow the count returned by SELECT, etc. to be represented as an int64
to allow a higher range of values
* Add SQL99 WITH clause to SELECT
* Add SQL:2003 WITH RECURSIVE (hierarchical) queries to SELECT
* Add DEFAULT .. AS OWNER so permission checks are done as the table
owner
This would be useful for SERIAL nextval() calls and CHECK constraints.
* Add a GUC to control whether BEGIN inside a transaction should abort
the transaction.
* Allow DISTINCT to work in multiple-argument aggregate calls
* Add column to pg_stat_activity that shows the progress of long-running
commands like CREATE INDEX and VACUUM
* Implement SQL:2003 window functions
* CREATE
o Allow CREATE TABLE AS to determine column lengths for complex
expressions like SELECT col1 || col2
o Use more reliable method for CREATE DATABASE to get a consistent
copy of db?
* UPDATE
o Allow UPDATE tab SET ROW (col, ...) = (SELECT...)
http://archives.postgresql.org/pgsql-hackers/2006-07/msg01306.php
* ALTER
o %Have ALTER TABLE RENAME rename SERIAL sequence names
o Add ALTER DOMAIN to modify the underlying data type
o %Allow ALTER TABLE ... ALTER CONSTRAINT ... RENAME
http://archives.postgresql.org/pgsql-patches/2006-02/msg00168.php
o %Allow ALTER TABLE to change constraint deferrability and actions
o Add missing object types for ALTER ... SET SCHEMA
o Allow ALTER TABLESPACE to move to different directories
o Allow databases to be moved to different tablespaces
o Allow moving system tables to other tablespaces, where possible
Currently non-global system tables must be in the default database
tablespace. Global system tables can never be moved.
o Prevent parent tables from altering or dropping constraints
like CHECK that are inherited by child tables unless CASCADE
is used
o %Prevent child tables from altering or dropping constraints
like CHECK that were inherited from the parent table
o Have ALTER INDEX update the name of a constraint using that index
o Add ALTER TABLE RENAME CONSTRAINT, update index name also
* CLUSTER
o Make CLUSTER preserve recently-dead tuples per MVCC requirements
o Automatically maintain clustering on a table
This might require some background daemon to maintain clustering
during periods of low usage. It might also require tables to be only
partially filled for easier reorganization. Another idea would
be to create a merged heap/index data file so an index lookup would
automatically access the heap data too. A third idea would be to
store heap rows in hashed groups, perhaps using a user-supplied
hash function.
http://archives.postgresql.org/pgsql-performance/2004-08/msg00349.php
o %Add default clustering to system tables
To do this, determine the ideal cluster index for each system
table and set the cluster setting during initdb.
* COPY
o Allow COPY to report error lines and continue
This requires the use of a savepoint before each COPY line is
processed, with ROLLBACK on COPY failure.
o Allow COPY on a newly-created table to skip WAL logging
On crash recovery, the table involved in the COPY would
be removed or have its heap and index files truncated. One
issue is that no other backend should be able to add to
the table at the same time, which is something that is
currently allowed.
* GRANT/REVOKE
o Allow column-level privileges
o %Allow GRANT/REVOKE permissions to be applied to all schema objects
with one command
The proposed syntax is:
GRANT SELECT ON ALL TABLES IN public TO phpuser;
GRANT SELECT ON NEW TABLES IN public TO phpuser;
o Allow GRANT/REVOKE permissions to be inherited by objects based on
schema permissions
o Allow SERIAL sequences to inherit permissions from the base table?
* CURSOR
o Allow UPDATE/DELETE WHERE CURRENT OF cursor
This requires using the row ctid to map cursor rows back to the
original heap row. This becomes more complicated if WITH HOLD cursors
are to be supported because WITH HOLD cursors have a copy of the row
and no FOR UPDATE lock.
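A sketch of the proposed usage (the table name is illustrative; the
WHERE CURRENT OF clause is the feature being requested):
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM accounts FOR UPDATE;
    FETCH NEXT FROM c;
    UPDATE accounts SET balance = 0 WHERE CURRENT OF c;   -- proposed
    COMMIT;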
o Prevent DROP TABLE from dropping a row referenced by its own open
cursor?
* INSERT
o Allow INSERT/UPDATE of the system-generated oid value for a row
o In rules, allow VALUES() to contain a mixture of 'old' and 'new'
references
* SHOW/SET
o Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM
ANALYZE, and CLUSTER
o Add SET PATH for schemas?
This is basically the same as SET search_path.
* Referential Integrity
o Add MATCH PARTIAL referential integrity
o Change foreign key constraint for array -> element to mean element
in array?
o Enforce referential integrity for system tables
o Fix problem when cascading referential triggers make changes on
cascaded tables, seeing the tables in an intermediate state
http://archives.postgresql.org/pgsql-hackers/2005-09/msg00174.php
o Allow DEFERRABLE and end-of-statement UNIQUE constraints?
This would allow UPDATE tab SET col = col + 1 to work if col has
a unique index. Currently, uniqueness checks are done while the
command is being executed, rather than at the end of the statement
or transaction.
http://people.planetpostgresql.org/greg/index.php?/archives/2006/06/10.html
http://archives.postgresql.org/pgsql-hackers/2006-09/msg01458.php
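For example, assuming a table tab(col int UNIQUE) containing 1 and 2:
    UPDATE tab SET col = col + 1;
    -- Today this can fail: while changing the row with col = 1 to 2, the
    -- per-row check sees the not-yet-updated row that still holds 2.
    -- With end-of-statement checking the statement would succeed, since
    -- all values are unique once the UPDATE completes.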
* Server-Side Languages
o PL/pgSQL
o Fix RENAME to work on variables other than OLD/NEW
o Allow function parameters to be passed by name,
get_employee_salary(12345 AS emp_id, 2001 AS tax_year)
o Add Oracle-style packages (Pavel)
A package would be a schema with session-local variables,
public/private functions, and initialization functions. It
is also possible to implement these capabilities
in all schemas and not use a separate "packages"
syntax at all.
http://archives.postgresql.org/pgsql-hackers/2006-08/msg00384.php
o Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]
o Allow listing of record column names, and access to
record columns via variables, e.g. columns := r.(*),
tval2 := r.(colname)
http://archives.postgresql.org/pgsql-patches/2005-07/msg00458.php
http://archives.postgresql.org/pgsql-patches/2006-05/msg00302.php
http://archives.postgresql.org/pgsql-patches/2006-06/msg00031.php
o Add MOVE
o Add single-step debugging of functions
o Add support for WITH HOLD and SCROLL cursors
PL/pgSQL cursors should support the same syntax as
backend cursors.
o Allow PL/RETURN to return row or record functions
http://archives.postgresql.org/pgsql-patches/2005-11/msg00045.php
o Fix memory leak from exceptions
http://archives.postgresql.org/pgsql-performance/2006-06/msg00305.php
o Fix problems with RETURN NEXT on tables with
dropped/added columns after function creation
http://archives.postgresql.org/pgsql-patches/2006-02/msg00165.php
o Other
o Add table function support to pltcl, plpython
o Add support for polymorphic arguments and return types to
languages other than PL/PgSQL
o Add capability to create and call PROCEDURES
o Add support for OUT and INOUT parameters to languages other
than PL/PgSQL
o Add PL/Python tracebacks
http://archives.postgresql.org/pgsql-patches/2006-02/msg00288.php
Clients
=======
* Have pg_ctl look at PGHOST in case it is a socket directory?
* Allow pg_ctl to work properly with configuration files located outside
the PGDATA directory
pg_ctl can not read the pid file because it isn't located in the
config directory but in the PGDATA directory. The solution is to
allow pg_ctl to read and understand postgresql.conf to find the
data_directory value.
* psql
o Have psql show current values for a sequence
o Move psql backslash database information into the backend, use
mnemonic commands? [psql]
This would allow non-psql clients to pull the same information out
of the database as psql.
o Make psql's \d commands more consistent
http://archives.postgresql.org/pgsql-hackers/2004-11/msg00014.php
o Allow psql \pset boolean variables to be set to fixed values, rather
than toggled
o Consistently display privilege information for all objects in psql
o Add auto-expanded mode so expanded output is used if the row
length is wider than the screen width.
Consider using auto-expanded mode for backslash commands like \df+.
o Prevent tab completion of SET TRANSACTION from querying the
database, because that query prevents the transaction isolation
level from being set.
Currently, SET <tab> causes a database lookup to check all
supported session variables. This query causes problems
because setting the transaction isolation level must be the
first statement of a transaction.
* pg_dump
o %Add dumping of comments on index columns and composite type columns
o %Add full object name to the tag field, e.g. for operators we need
'=(integer, integer)' instead of just '='.
o Add pg_dumpall custom format dumps?
o Remove unnecessary function pointer abstractions in pg_dump source
code
o Allow selection of individual object(s) of all types, not just
tables
o In a selective dump, allow dumping of an object and all its
dependencies
o Add options like pg_restore -l and -L to pg_dump
o Stop dumping CASCADE on DROP TYPE commands in clean mode
o Allow pg_dump --clean to drop roles that own objects or have
privileges
o Add -f to pg_dumpall
* ecpg
o Docs
Document differences between ecpg and the SQL standard and
information about the Informix-compatibility module.
o Solve cardinality > 1 for input descriptors / variables?
o Add a semantic check level, e.g. check if a table really exists
o Fix handling of DB attributes that are arrays
o Use backend PREPARE/EXECUTE facility for ecpg where possible
o Implement SQLDA
o Fix nested C comments
o %sqlwarn[6] should be 'W' if the PRECISION or SCALE value specified
o Make SET CONNECTION thread-aware, non-standard?
o Allow multidimensional arrays
o Add internationalized message strings
o Implement COPY FROM STDIN
* libpq
o Add PQescapeIdentifierConn()
o Prevent PQfnumber() from lowercasing the unquoted column name
PQfnumber() should never have been doing lowercasing, but
historically it has, so we need a way to prevent it
o Allow statement results to be automatically batched to the client
Currently, all statement results are transferred to the libpq
client before libpq makes the results available to the
application. This feature would allow the application to make
use of the first result rows while the rest are transferred, or
held on the server waiting for them to be requested by libpq.
One complexity is that a statement like SELECT 1/col could error
out mid-way through the result set.
* Fix SSL retry to avoid useless repeated connection attempts and
ensuing misleading error messages
Triggers
========
* Add deferred trigger queue file
Right now all deferred trigger information is stored in backend
memory. This could exhaust memory for very large trigger queues.
This item involves dumping large queues into files.
* Allow triggers to be disabled in only the current session.
This is currently possible by starting a multi-statement transaction,
modifying the system tables, performing the desired SQL, restoring the
system tables, and committing the transaction. ALTER TABLE ...
TRIGGER requires a table lock so it is not ideal for this usage.
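A sketch of that catalog workaround, assuming superuser rights and a
table named mytable (pg_class.reltriggers is the trigger count the
executor consults):
    BEGIN;
    UPDATE pg_class SET reltriggers = 0 WHERE relname = 'mytable';
    -- ... run the desired SQL with triggers effectively disabled ...
    UPDATE pg_class SET reltriggers = (
        SELECT count(*) FROM pg_trigger WHERE tgrelid = 'mytable'::regclass
    ) WHERE relname = 'mytable';
    COMMIT;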
* With disabled triggers, allow pg_dump to use ALTER TABLE ADD FOREIGN KEY
If the dump is known to be valid, allow foreign keys to be added
without revalidating the data.
* Allow statement-level triggers to access modified rows
* Support triggers on columns
http://archives.postgresql.org/pgsql-patches/2005-07/msg00107.php
* Allow AFTER triggers on system tables
System tables are modified in many places in the backend without going
through the executor and therefore not causing triggers to fire. To
complete this item, the functions that modify system tables will have
to fire triggers.
Dependency Checking
===================
* Flush cached query plans when the dependent objects change,
when the cardinality of parameters changes dramatically, or
when new ANALYZE statistics are available
A more complex solution would be to save multiple plans for different
cardinality and use the appropriate plan based on the EXECUTE values.
* Track dependencies in function bodies and recompile/invalidate
This is particularly important for references to temporary tables
in PL/PgSQL because PL/PgSQL caches query plans. The only workaround
in PL/PgSQL is to use EXECUTE. One complexity is that a function
might itself drop and recreate dependent tables, causing it to
invalidate its own query plan.
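A sketch of the EXECUTE workaround (function and table names are
illustrative):
    CREATE OR REPLACE FUNCTION use_temp_table() RETURNS integer AS $$
    DECLARE
        result integer;
    BEGIN
        -- EXECUTE builds a fresh plan on every call, so the references to
        -- the temporary table are not tied to a stale cached plan.
        EXECUTE 'CREATE TEMP TABLE scratch (val integer) ON COMMIT DROP';
        EXECUTE 'INSERT INTO scratch VALUES (1)';
        EXECUTE 'SELECT val FROM scratch' INTO result;
        RETURN result;
    END;
    $$ LANGUAGE plpgsql;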
Indexes
=======
* Allow inherited tables to inherit indexes, UNIQUE constraints, primary
keys, and foreign keys
* UNIQUE INDEX on base column not honored on INSERTs/UPDATEs from
inherited table: INSERT INTO inherit_table (unique_index_col) VALUES
(dup) should fail
The main difficulty with this item is the problem of creating an index
that can span more than one table.
* Allow SELECT ... FOR UPDATE on inherited tables
* Add UNIQUE capability to non-btree indexes
* Prevent index uniqueness checks when UPDATE does not modify the column
Uniqueness (index) checks are done when updating a column even if the
column is not modified by the UPDATE.
* Allow the creation of on-disk bitmap indexes which can be quickly
combined with other bitmap indexes
Such indexes could be more compact if there are only a few distinct values.
Such indexes can also be compressed. Keeping such indexes updated can be
costly.
http://archives.postgresql.org/pgsql-patches/2005-07/msg00512.php
* Allow use of indexes to search for NULLs
One solution is to create a partial index on an IS NULL expression.
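For example (table and column names are illustrative):
    CREATE INDEX tab_col_null_idx ON tab (col) WHERE col IS NULL;
    SELECT * FROM tab WHERE col IS NULL;   -- can scan the small partial index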
* Allow accurate statistics to be collected on indexes with more than
one column or expression indexes, perhaps using per-index statistics
* Allow the creation of indexes with mixed ascending/descending
specifiers
This is possible now by creating an operator class with reversed sort
operators. One complexity is that NULLs would then appear at the start
of the result set, and this might affect certain sort types, like
merge join.
* Allow constraint_exclusion to work for UNIONs like it does for
inheritance, allow it to work for UPDATE and DELETE statements, and allow
it to be used for all statements with little performance impact
* Allow CREATE INDEX to take an additional parameter for use with
special index types
* Consider compressing indexes by storing key values duplicated in
several rows as a single index entry
This is difficult because it requires datatype-specific knowledge.
* GIST
o Add more GIST index support for geometric data types
o Allow GIST indexes to create certain complex index types, like
digital trees (see Aoki)
* Hash
o Pack hash index buckets onto disk pages more efficiently
Currently only one hash bucket can be stored on a page. Ideally
several hash buckets could be stored on a single page and greater
granularity used for the hash algorithm.
o Consider sorting hash buckets so entries can be found using a
binary search, rather than a linear scan
o In hash indexes, consider storing the hash value with or instead
of the key itself
o Add WAL logging for crash recovery
o Allow multi-column hash indexes
Fsync
=====
* Improve commit_delay handling to reduce fsync()
* Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options
Ideally this requires a separate test program that can be run
at initdb time or optionally later. Consider O_SYNC when
O_DIRECT exists.
* %Add an option to sync() before fsync()'ing checkpoint files
* Add program to test if fsync has a delay compared to non-fsync
Cache Usage
===========
* Allow free-behind capability for large sequential scans, perhaps using
posix_fadvise()
Posix_fadvise() can control both sequential/random file caching and
free-behind behavior, but it is unclear how the setting affects other
backends that also have the file open, and the feature is not supported
on all operating systems.
* Speed up COUNT(*)
We could use a fixed row count and a +/- count to follow MVCC
visibility rules, or a single cached value could be used and
invalidated if anyone modifies the table. Another idea is to
get a count directly from a unique index, but for this to be
faster than a sequential scan it must avoid access to the heap
to obtain tuple visibility information.
* Add estimated_count(*) to return an estimate of COUNT(*)
This would use the planner ANALYZE statistics to return an estimated
count.
http://archives.postgresql.org/pgsql-hackers/2005-11/msg00943.php
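Until then, the planner's estimate can be read from the catalog, with
accuracy depending on how recently the table was analyzed:
    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class WHERE relname = 'orders';   -- illustrative table name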
* Allow data to be pulled directly from indexes
Currently indexes do not have enough tuple visibility information
to allow data to be pulled from the index without also accessing
the heap. One way to allow this is to set a bit on index tuples
to indicate if a tuple is currently visible to all transactions
when the first valid heap lookup happens. This bit would have to
be cleared when a heap tuple is expired.
Another idea is to maintain a bitmap of heap pages where all rows
are visible to all backends, and allow index lookups to reference
that bitmap to avoid heap lookups, perhaps the same bitmap we might
add someday to determine which heap pages need vacuuming. Frequently
accessed bitmaps would have to be stored in shared memory. One 8k
page of bitmaps could track 512MB of heap pages.
* Consider automatic caching of statements at various levels:
o Parsed query tree
o Query execute plan
o Query results
* Allow sequential scans to take advantage of other concurrent
sequential scans, also called "Synchronised Scanning"
One possible implementation is to start sequential scans from the lowest
numbered buffer in the shared cache, and when reaching the end wrap
around to the beginning, rather than always starting sequential scans
at the start of the table.
* Consider increasing internal areas when shared buffers is increased
http://archives.postgresql.org/pgsql-hackers/2005-10/msg01419.php
Vacuum
======
* Improve speed with indexes
For large table adjustments during VACUUM FULL, it is faster to
reindex rather than update the index.
* Reduce lock time during VACUUM FULL by moving tuples with read lock,
then write lock and truncate table
Moved tuples are invisible to other backends so they don't require a
write lock. However, the read lock promotion to write lock could lead
to deadlock situations.
* Auto-fill the free space map by scanning the buffer cache or by
checking pages written by the background writer
http://archives.postgresql.org/pgsql-hackers/2006-02/msg01125.php
http://archives.postgresql.org/pgsql-hackers/2006-03/msg00011.php
* Create a bitmap of pages that need vacuuming
Instead of sequentially scanning the entire table, have the background
writer or some other process record pages that have expired rows, then
VACUUM can look at just those pages rather than the entire table. In
the event of a system crash, the bitmap would probably be invalidated.
One complexity is that index entries still have to be vacuumed, and
doing this without an index scan (by using the heap values to find the
index entry) might be slow and unreliable, especially for user-defined
index functions.
* Allow FSM to return free space toward the beginning of the heap file,
in hopes that empty pages at the end can be truncated by VACUUM
* Allow the FSM to return free space based on table clustering, to assist
in maintaining clustering?
* Consider shrinking expired tuples to just their headers
http://archives.postgresql.org/pgsql-patches/2006-03/msg00142.php
* Allow heap reuse of UPDATEd rows if no indexed columns are changed,
and old and new versions are on the same heap page?
While vacuum handles DELETEs fine, updates of non-indexed columns, like
counters, are difficult for VACUUM to handle efficiently. This method
is possible for same-page updates because a single index row can be
used to point to both old and new values.
http://archives.postgresql.org/pgsql-hackers/2006-06/msg01305.php
http://archives.postgresql.org/pgsql-hackers/2006-06/msg01534.php
* Reuse index tuples that point to heap tuples that are not visible to
anyone?
* Auto-vacuum
o Use free-space map information to guide refilling
o %Issue log message to suggest VACUUM FULL if a table is nearly
empty?
o Consider logging activity either to the logs or a system view
o Turn on by default
http://archives.postgresql.org/pgsql-hackers/2006-08/msg01852.php
Locking
=======
* Fix priority ordering of read and write light-weight locks (Neil)
Startup Time Improvements
=========================
* Experiment with multi-threaded backend for backend creation [thread]
This would prevent the overhead associated with process creation. Most
operating systems have trivial process creation time compared to
database startup overhead, but a few operating systems (Win32,
Solaris) might benefit from threading. Also explore the idea of
a single session using multiple threads to execute a statement faster.
* Experiment with a multi-threaded backend for better resource utilization
This would allow a single query to make use of multiple CPU's or
multiple I/O channels simultaneously. One idea is to create a
background reader that can pre-fetch sequential and index scan
pages needed by other backends. This could be expanded to allow
concurrent reads from multiple devices in a partitioned table.
* Add connection pooling
It is unclear if this should be done inside the backend code or done
by something external like pgpool. The passing of file descriptors to
existing backends is one of the difficulties with a backend approach.
Write-Ahead Log
===============
* Eliminate need to write full pages to WAL before page modification [wal]
Currently, to protect against partial disk page writes, we write
full page images to WAL before they are modified so we can correct any
partial page writes during recovery. These pages can also be
eliminated from point-in-time archive files.
o When off, write CRC to WAL and check file system blocks
on recovery
If CRC check fails during recovery, remember the page in case
a later CRC for that page properly matches.
o Write full pages during file system write and not when
the page is modified in the buffer cache
This allows most full page writes to happen in the background
writer. It might cause problems for applying WAL on recovery
into a partially-written page, but later the full page will be
replaced from WAL.
* Allow WAL traffic to be streamed to another server for stand-by
replication
* Reduce WAL traffic so only modified values are written rather than
entire rows?
* Allow the pg_xlog directory location to be specified during initdb
with a symlink back to the /data location
* Allow WAL information to recover corrupted pg_controldata
http://archives.postgresql.org/pgsql-patches/2006-06/msg00025.php
* Find a way to reduce rotational delay when repeatedly writing
last WAL page
Currently fsync of WAL requires the disk platter to perform a full
rotation to fsync again. One idea is to write the WAL to different
offsets that might reduce the rotational delay.
* Allow buffered WAL writes and fsync
Instead of guaranteeing recovery of all committed transactions, this
would provide improved performance by delaying WAL writes and fsync
so an abrupt operating system restart might lose a few seconds of
committed transactions but still be consistent. We could perhaps
remove the 'fsync' parameter (which results in an inconsistent
database) in favor of this capability.
* Allow WAL logging to be turned off for a table, but the table
might be dropped or truncated during crash recovery [walcontrol]
Allow tables to bypass WAL writes and just fsync() dirty pages on
commit. This should be implemented using ALTER TABLE, e.g. ALTER
TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ]. Tables using
non-default logging should not use referential integrity with
default-logging tables. A table without dirty buffers during a
crash could perhaps avoid the drop/truncate.
* Allow WAL logging to be turned off for a table, but the table would
avoid being truncated/dropped [walcontrol]
To do this, only a single writer can modify the table, and writes
must happen only on new pages so the new pages can be removed during
crash recovery. Readers can continue accessing the table. Such
tables probably cannot have indexes. One complexity is the handling
of indexes on TOAST tables.
Optimizer / Executor
====================
* Improve selectivity functions for geometric operators
* Allow ORDER BY ... LIMIT # to select high/low value without sort or
index using a sequential scan for highest/lowest values
Right now, if no index exists, ORDER BY ... LIMIT # requires we sort
all values to return the high/low value. Instead, the idea is to do a
sequential scan to find the high/low value, thus avoiding the sort.
MIN/MAX already does this, but not for LIMIT > 1.
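For example, with no index on col the following currently sorts the
whole table even though only three values are needed:
    SELECT col FROM tab ORDER BY col DESC LIMIT 3;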
* Precompile SQL functions to avoid overhead
* Create utility to compute accurate random_page_cost value
* Improve ability to display optimizer analysis using OPTIMIZER_DEBUG
* Have EXPLAIN ANALYZE issue NOTICE messages when the estimated and
actual row counts differ by a specified percentage
* Consider using hash buckets to do DISTINCT, rather than sorting
This would be beneficial when there are few distinct values. This is
already used by GROUP BY.
* Log statements where the optimizer row estimates were dramatically
different from the number of rows actually found?
* Consider compressed annealing to search for query plans
This might replace GEQO, http://sixdemonbag.org/Djinni.
Miscellaneous Performance
=========================
* Do async I/O for faster random read-ahead of data
Async I/O allows multiple I/O requests to be sent to the disk with
results coming back asynchronously.
http://archives.postgresql.org/pgsql-hackers/2006-10/msg00820.php
* Use mmap() rather than SYSV shared memory or to write WAL files?
This would remove the requirement for SYSV SHM but would introduce
portability issues. Anonymous mmap (or mmap to /dev/zero) is required
to prevent I/O overhead.
* Consider mmap()'ing files into a backend?
Doing I/O to large tables would consume a lot of address space or
require frequent mapping/unmapping. Extending the file also causes
mapping problems that might require mapping only individual pages,
leading to thousands of mappings. Another problem is that there is no
way to _prevent_ I/O to disk from the dirty shared buffers so changes
could hit disk before WAL is written.
* Add a script to ask system configuration questions and tune postgresql.conf
* Merge xmin/xmax/cmin/cmax back into three header fields
Before subtransactions, there used to be only three fields needed to
store these four values. This was possible because only the current
transaction looks at the cmin/cmax values. If the current transaction
created and expired the row, the fields stored were xmin (same as
xmax), cmin, and cmax; if the transaction was expiring a row from
another transaction, the fields stored were xmin (cmin was not
needed), xmax, and cmax. Such a system worked because a transaction
could only see rows from another completed transaction. However,
subtransactions can see rows from outer transactions, and once the
subtransaction completes, the outer transaction continues, requiring
the storage of all four fields. With subtransactions, an outer
transaction can create a row, a subtransaction expire it, and when the
subtransaction completes, the outer transaction still has to have
proper visibility of the row's cmin, for example, for cursors.
One possible solution is to create a phantom cid which represents a
cmin/cmax pair and is stored in local memory. Another idea is to
store both cmin and cmax only in local memory.
* Consider ways of storing rows more compactly on disk
o Support a smaller header for short variable-length fields?
One idea is to create zero-or-one-byte-header versions
of varlena data types. It involves setting the high bit and storing
a 0-127 length in the single-byte header, or clearing the high bit
and storing the 7-bit ASCII value in the rest of the byte.
The small-header versions have no alignment requirements.
http://archives.postgresql.org/pgsql-hackers/2006-09/msg01372.php
o Reduce the row header size?
Source Code
===========
* Add use of 'const' for variables in source tree
* Move some things from /contrib into main tree
* Move some /contrib modules out to their own project sites
Particularly, move GPL-licensed /contrib/userlock and
/contrib/dbmirror/clean_pending.pl.
* %Remove warnings created by -Wcast-align
* Move platform-specific ps status display info from ps_status.c to ports
* Add optional CRC checksum to heap and index pages
* Improve documentation to build only interfaces (Marc)
* Remove or relicense modules that are not under the BSD license, if possible
* %Remove memory/file descriptor freeing before ereport(ERROR)
* Acquire lock on a relation before building a relcache entry for it
* %Promote debug_query_string into a server-side function current_query()
* %Allow the identifier length to be increased via a configure option
* Allow cross-compiling by generating the zic database on the target system
* Improve NLS maintenance of libpgport messages linked onto applications
* Allow ecpg to work with MSVC and BCC
* Add xpath_array() to /contrib/xml2 to return results as an array
* Allow building in directories containing spaces
This is probably not possible because 'gmake' and other compiler tools
do not fully support quoting of paths with spaces.
* Fix sgmltools so PDFs can be generated with bookmarks
* Use UTF8 encoding for NLS messages so all server encodings can
read them properly
* Update Bonjour to work with newer cross-platform SDK
* Split out libpq pgpass and environment documentation sections to make
it easier for non-developers to find
* Consider detoasting keys before sorting
* Consider GnuTLS if OpenSSL license becomes a problem
http://archives.postgresql.org/pgsql-patches/2006-05/msg00040.php
* Use strlcpy() rather than our StrNCpy() macro
http://archives.postgresql.org/pgsql-hackers/2006-09/msg02108.php
* Consider changing documentation format from SGML to XML
http://archives.postgresql.org/pgsql-docs/2006-12/msg00152.php
* Win32
o Remove configure.in check for link failure when cause is found
o Remove readdir() errno patch when runtime/mingwex/dirent.c rev
1.4 is released
o Remove psql newline patch when we find out why mingw outputs an
extra newline
o Allow psql to use readline once non-US code pages work with
backslashes
o Re-enable timezone output on log_line_prefix '%t' when a
shorter timezone string is available
o Fix problem with shared memory on the Win32 Terminal Server
o Improve signal handling
http://archives.postgresql.org/pgsql-patches/2005-06/msg00027.php
o Add long file support for binary pg_dump output
While Win32 supports 64-bit files, the MinGW API does not,
meaning we have to build an fseeko replacement on top of the
Win32 API, and we have to make sure MinGW handles it. Another
option is to wait for the MinGW project to fix it, or use the
code from the LibGW32C project as a guide.
o Check WSACancelBlockingCall() for interrupts [win32intr]
* Wire Protocol Changes
o Allow dynamic character set handling
o Add decoded type, length, precision
o Use compression?
o Update clients to use data types, typmod, schema.table.column names
of result sets using new statement protocol
Exotic Features
===============
* Add pre-parsing phase that converts non-ISO syntax to supported
syntax
This could allow SQL written for other databases to run without
modification.
* Allow plug-in modules to emulate features from other databases
* SQL*Net listener that makes PostgreSQL appear as an Oracle database
to clients
* Allow statements across databases or servers with transaction
semantics
This can be done using dblink and two-phase commit.
* Add the features of packages
o Make private objects accessible only to objects in the same schema
o Allow current_schema.objname to access current schema objects
o Add session variables
o Allow nested schemas
* Consider allowing control of upper/lower case folding of unquoted
identifiers
http://archives.postgresql.org/pgsql-hackers/2004-04/msg00818.php
http://archives.postgresql.org/pgsql-hackers/2006-10/msg01527.php
Features We Do _Not_ Want
=========================
* All backends running as threads in a single process (not wanted)
This eliminates the process protection we get from the current setup.
Thread creation is usually the same overhead as process creation on
modern systems, so it seems unwise to use a pure threaded model.
* Optimizer hints (not wanted)
Optimizer hints are used to work around problems in the optimizer. We
would rather have the problems reported and fixed.
http://archives.postgresql.org/pgsql-hackers/2006-08/msg00506.php
http://archives.postgresql.org/pgsql-hackers/2006-10/msg00517.php
http://archives.postgresql.org/pgsql-hackers/2006-10/msg00663.php
* Allow AS in "SELECT col AS label" to be optional (not wanted)
Because we support postfix operators, it isn't possible to make AS
optional and continue to use bison.
http://archives.postgresql.org/pgsql-sql/2006-08/msg00164.php
* Embedded server (not wanted)
While PostgreSQL clients run fine in limited-resource environments, the
server requires multiple processes and a stable pool of resources to
run reliably and efficiently. Stripping down the PostgreSQL server
to run in the same process address space as the client application
would add too much complexity and failure cases.
---------------------------------------------------------------------------
Developers who have claimed items are:
--------------------------------------
* Alvaro is Alvaro Herrera <alvherre@dcc.uchile.cl>
* Andrew is Andrew Dunstan <andrew@dunslane.net>
* Bruce is Bruce Momjian <bruce@momjian.us> of EnterpriseDB
* Christopher is Christopher Kings-Lynne <chriskl@familyhealth.com.au> of
Family Health Network
* D'Arcy is D'Arcy J.M. Cain <darcy@druid.net> of The Cain Gang Ltd.
* David is David Fetter <david@fetter.org>
* Fabien is Fabien Coelho <coelho@cri.ensmp.fr>
* Gavin is Gavin Sherry <swm@linuxworld.com.au> of Alcove Systems Engineering
* Greg is Greg Sabino Mullane <greg@turnstep.com>
* Jan is Jan Wieck <JanWieck@Yahoo.com> of Afilias, Inc.
* Joe is Joe Conway <mail@joeconway.com>
* Karel is Karel Zak <zakkr@zf.jcu.cz>
* Magnus is Magnus Hagander <mha@sollentuna.net>
* Marc is Marc Fournier <scrappy@hub.org> of PostgreSQL, Inc.
* Matthew T. O'Connor <matthew@zeut.net>
* Michael is Michael Meskes <meskes@postgresql.org> of Credativ
* Neil is Neil Conway <neilc@samurai.com>
* Oleg is Oleg Bartunov <oleg@sai.msu.su>
* Pavel is Pavel Stehule <pavel.stehule@hotmail.com>
* Peter is Peter Eisentraut <peter_e@gmx.net>
* Philip is Philip Warner <pjw@rhyme.com.au> of Albatross Consulting Pty. Ltd.
* Rod is Rod Taylor <pg@rbt.ca>
* Simon is Simon Riggs <simon@2ndquadrant.com>
* Stephan is Stephan Szabo <sszabo@megazone23.bigpanda.com>
* Tatsuo is Tatsuo Ishii <ishii@sraoss.co.jp> of SRA OSS, Inc. Japan
* Teodor is Teodor Sigaev <teodor@sigaev.ru>
* Tom is Tom Lane <tgl@sss.pgh.pa.us> of Red Hat