Hiroshi. ReleaseRelationBuffers now removes rel's buffers from pool,
instead of merely marking them nondirty. The old code would leave valid
buffers for a deleted relation, which didn't cause any known problems
but can't possibly be a good idea. There were several places which called
ReleaseRelationBuffers *and* FlushRelationBuffers, which is now
unnecessary; but there were others that did not. FlushRelationBuffers
no longer emits a warning notice if it finds dirty buffers to flush,
because with the current bufmgr behavior that's not an unexpected
condition. Also, FlushRelationBuffers will flush out all dirty buffers
for the relation regardless of block number. This ensures that
pg_upgrade's expectations are met about tuple on-row status bits being
up-to-date on disk. Lastly, tweak BufTableDelete() to clear the
buffer's tag so that no one can mistake it for being a still-valid
buffer for the page it once held. Formerly, the buffer would not be
found by buffer hashtable searches after BufTableDelete(), but it would
still be thought to belong to its old relation by the routines that
sequentially scan the shared-buffer array. Again I know of no bugs
caused by that, but it still can't be a good idea.
RowExclusive (my fault). Also, install a check to prevent people
from trying COPY BINARY to stdout/from stdin. No way that will
work unless we redesign the frontend COPY protocol ... which is
not worth the trouble in the near future ...
days. It seems to be a FAQ, and I think I know why. When creating a 'c'
language function, CREATE FUNCTION is fed the shared object filename,
and seems to succeed. Only when trying to use the function is an error
thrown, by which time the coder thinks something's wrong with executing
the code, not with loading it.
I think I once saw it proposed to load shared objects at function creation
time, but that idea was shot down on the grounds of resident memory bloat,
ISTR. Here's a patch for a compromise: all it does is stat() the file,
just like the loader code does, so that the errors caused by nonexistent
files and missing directory 'x' permissions (the most common ones, it seems)
get caught while the developer is still thinking about code loading. It
doesn't catch all errors (like the code not being readable by the postgres
user), but it seems to catch the most common ones without actually opening the file.
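To illustrate (the function and file names here are made up): with this check in
place, a bad object file path is reported at CREATE FUNCTION time rather than at
the first call.
CREATE FUNCTION add_one(int4) RETURNS int4
    AS '/usr/local/pgsql/lib/no_such_file.so' LANGUAGE 'c';
-- before: CREATE appears to succeed and only a later SELECT add_one(1) fails
-- after:  CREATE itself complains that the file cannot be stat()'d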
What do you think?
Ross
indexes, apparently, nor on functional indexes with more than one input
column (force of natts = 1 was in the wrong branch of IF statement).
Coredumped if source relation contained any uncommitted tuples, due to
failure to test for success return from heap_fetch. Fetched tuple
was passed directly to heap_insert, which clobbers the TID and commit
status in the tuple header it's given, which meant that the source
relation's tuples all got trashed as the copy proceeded. Abort partway
through, and you're left with a lot of missing tuples.
I wonder what else is lurking here ...
Add a random number generator and seed setter (random(), SET SEED)
Fix up the interval*float8 math to carry partial months
into the time field.
Add float8*interval so we have symmetry in the available math.
Fix the parser and define.c to accept SQL92 types as field arguments.
Fix the parser to accept SQL92 types for CREATE TYPE, etc. This is
necessary to allow...
Bit/varbit support in contrib/bit cleaned up to compile and load
cleanly. Still needs some work before final release.
Implement the "SOME" keyword as a synonym for "ANY" per SQL92.
Implement ascii(text), ichar(int4), repeat(text,int4) to help
support the ODBC driver.
Enable the TRUNCATE() function mapping in the ODBC driver.
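A few illustrative statements for the items above (exact option spellings are
assumed; table and column names are invented):
SET SEED TO 0.5;
SELECT random();
SELECT ascii('A'), ichar(65), repeat('na', 4);
SELECT * FROM orders WHERE qty = SOME (SELECT qty FROM backorders);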
running gcc and HP's cc with warnings cranked way up. Signed vs unsigned
comparisons, routines declared static and then defined not-static,
that kind of thing. Tedious, but perhaps useful...
CREATE DB/DROP DB. If you didn't think they were wrong, try what
happens when you compile with -DCLOBBER_FREED_MEMORY --- database
name displayed in error messages is trashed, because transaction
abort freed it. Also, remove trailing periods in error messages,
per our prevailing style.
Implement TIME WITH TIME ZONE type (timetz internal type).
Remap length() for character strings to CHAR_LENGTH() for SQL92
and to remove the ambiguity with geometric length() functions.
Keep length() for character strings for backward compatibility.
Shrink stored views by removing internal column name list from visible rte.
Implement min(), max() for time and timetz data types.
Implement conversion of TIME to INTERVAL.
Implement abs(), mod(), fac() for the int8 data type.
Rename some math functions to generic names:
round(), sqrt(), cbrt(), pow(), etc.
Rename NUMERIC power() function to pow().
Fix int2 factorial to calculate result in int4.
Enhance the Oracle compatibility function translate() to work with string
arguments (from Edwin Ramirez).
Modify pg_proc system table to remove OID holes.
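Hedged examples of a few of the items above (the timetz literal syntax is an
assumption):
SELECT char_length('abcdef');                 -- 6; length() still works for strings
SELECT pow(2.0, 10.0), sqrt(2.0), cbrt(27.0);
SELECT '23:59:00+09'::timetz;                 -- a TIME WITH TIME ZONE value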
variable, instead calling same code in variable.c that is used to parse
SET DATESTYLE. Fix bug: although backend's startup datestyle had been
changed to ISO, 'RESET DATESTYLE' and 'SET DATESTYLE TO DEFAULT' didn't
know about it. For consistency I have made the latter two reset to the
PGDATESTYLE-defined initial value, which may not be the same as the
compiled-in default of ISO.
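In other words, assuming PGDATESTYLE is set in the postmaster's environment, the
following now behave consistently:
SET DATESTYLE TO 'ISO';
SET DATESTYLE TO DEFAULT;   -- resets to the PGDATESTYLE-derived startup value
RESET DATESTYLE;            -- same thing, no longer assumes the compiled-in ISO default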
accesses versus sequential accesses, a (very crude) estimate of the
effects of caching on random page accesses, and cost to evaluate WHERE-
clause expressions. Export critical parameters for this model as SET
variables. Also, create SET variables for the planner's enable flags
(enable_seqscan, enable_indexscan, etc) so that these can be controlled
more conveniently than via PGOPTIONS.
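A sketch of the intended usage (the cost-parameter name is an assumption;
enable_seqscan is one of the flags named above; the table is invented):
SET enable_seqscan TO 'off';
SET random_page_cost = 2;                     -- one of the exported cost parameters
EXPLAIN SELECT * FROM bigtable WHERE key = 42;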
Planner now estimates both startup cost (cost before retrieving
first tuple) and total cost of each path, so it can optimize queries
with LIMIT on a reasonable basis by interpolating between these costs.
Same facility is a win for EXISTS(...) subqueries and some other cases.
Redesign pathkey representation to achieve a major speedup in planning
(I saw as much as 5X on a 10-way join); also minor changes in planner
to reduce memory consumption by recycling discarded Path nodes and
not constructing unnecessary lists.
Minor cleanups to display more-plausible costs in some cases in
EXPLAIN output.
Initdb forced by change in interface to index cost estimation
functions.
SELECT a FROM t1 tx (a);
Allow join syntax, including queries like
SELECT * FROM t1 NATURAL JOIN t2;
Update RTE structure to hold column aliases in an Attr structure.
this is an old patch which I have already submitted and never seen
in the sources. It corrects the datatype oids used in some iterator
functions. This bug has been reported to me by many other people.
contrib-datetime.patch
some code contributed by Reiner Dassing <dassing@wettzell.ifag.de>
contrib-makefiles.patch
fixes all my contrib makefiles which don't work with some compilers,
as reported to me by another user.
contrib-miscutil.patch
an old patch for one of my old contribs.
contrib-string.patch
a small change to the c-like text output functions. Now the '{'
is escaped only at the beginning of the string, to distinguish it
from arrays, and the '}' is no longer escaped.
elog-lineno.patch
adds the current line number of CopyFrom to elog messages. This is very
useful when you load a one-million-tuple table from an external file
and there is a bad value somewhere. Currently you get an error message
but you can't tell where the bad data is. The patch uses a variable
which was declared static in copy.c. The variable is now exported
and initialized to 0. It is always cleared at the end of the copy,
at the first elog message, or when the copy is canceled.
I know this is very ugly, but I can't find any better way of knowing
where the copy fails, and I run into this problem quite often.
plperl-makefile.patch
fixes a typo in a makefile, but the error must be elsewhere because
it is a file generated automatically. Please have a look.
tprintf-timestamp.patch
restores the original 2-digit year format, assuming that the two
century digits don't carry much information and that '000202' is
easier to read than '20000202'. Since this affects only a log file,
it shouldn't break anything.
Please apply the patches before the next scheduled code freeze.
I also noticed that some of the contribs don't compile correctly. Should we
ask people to fix their code, or rename their makefiles so that they are
ignored by the top makefile?
--
Massimo Dal Zotto
a switch statement that has an empty default label. A label of a
switch statement must be followed by a statement (or a label which
is followed by a statement (or a label which ...)).
3. Files that include stringinfo.h failed to compile. The macro
'appendStringInfoCharMacro' is implemented with a '?:' operation
that returns a void expression for the true part and a char expression
for the false part. Both the true and false parts of the '?:' operator
must return the same type.
Billy G. Allie
Added constraint dumping capability to pg_dump (also from Stephan)
Fixed DROP TABLE -> RelationBuildTriggers: 2 record(s) not found for rel
error.
Fixed a little error in gram.y that I introduced over the last few days.
Jan
syscache and relcache flushes). Relcache entry rebuild now preserves
original tupledesc, rewrite rules, and triggers if possible, so that pointers
to these things remain valid --- if these things change while relcache entry
has positive refcount, we elog(ERROR) to avoid later crash. Arrange for
xact-local rels to be rebuilt when an SI inval message is seen for them,
so that they are updated by CommandCounterIncrement the same as regular rels.
(This is useful because of Hiroshi's recent changes to process our own SI
messages at CommandCounterIncrement time.) This allows simplification of
some routines that previously hacked around the lack of an automatic update.
catcache now keeps its own copy of tupledesc for its relation, rather than
depending on the relcache's copy; this avoids needing to reinitialize catcache
during a cache flush, which saves some cycles and eliminates nasty circularity
problems that occur if a cache flush happens while trying to initialize a
catcache.
Eliminate a number of permanent memory leaks that used to happen during
catcache or relcache flush; not least of which was that catcache never
freed any cached tuples! (Rule parsetree storage is still leaked, however;
will fix that separately.)
Nothing done yet about code that uses tuples retrieved by SearchSysCache
for longer than is safe.
Initdb help correction
Changed end/abort to commit/rollback and changed related notices
Commented out way old printing functions in libpq
Fixed a typo in alter table / alter column
pghackers discussion of 5-Jan-2000. The amopselect and amopnpages
estimators are gone, and in their place is a per-AM amcostestimate
procedure (linked to from pg_am, not pg_amop).
Attached is a small fix for a stupid mistake I made in comment.c
- an attempt to drop a non-existent comment would dump core :-(.
Sometimes, I'm as sharp as a marble.
Sorry,
Mike Mascari
from a constraint condition does not violate the constraint (cf. discussion
on pghackers 12/9/99). Implemented by adding a parameter to ExecQual,
specifying whether to return TRUE or FALSE when the qual result is
really NULL in three-valued boolean logic. Currently, ExecRelCheck is
the only caller that asks for TRUE, but if we find any other places that
have the wrong response to NULL, it'll be easy to fix them.
Attached is a patch which patches cleanly against the Sunday afternoon
snapshot. It modifies pg_dump to dump COMMENT ON statements for
user-definable descriptions. In addition, it also modifies comment.c so
that the operator behavior is as Peter E. would like: a comment on an
operator is applied to the underlying function.
Thanks,
Mike Mascari
read is reused for successive attributes, instead of being deleted and
recreated from scratch for each value read in. This reduces palloc/pfree
overhead a lot. COPY IN still seems to be noticeably slower than it was
in 6.5 --- we need to figure out why. This change takes care of the only
major performance loss I can see in copy.c itself, so the performance
problem is at a lower level somewhere.
functions, which would lead to trouble with datatypes that paid attention
to the typelem or typmod parameters to these functions. In particular,
incorrect code in pg_aggregate.c explains the platform-specific failures
that have been reported in NUMERIC avg().
* Let unprivileged users change their own passwords.
* The password is now an Sconst in the parser, which better reflects its text datatype and also
forces users to quote it.
* If your password is NULL you won't be written to the password file, meaning you can't connect
until you have a password set up (if you use password authentication).
* When you drop a user that owns a database you get an error. The database is not gone.
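For example (user name invented), an unprivileged user can now run this on his
own account:
ALTER USER fred WITH PASSWORD 'secret';   -- the password must now be quoted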
errors. VACUUM normally compacts the table back-to-front, and stops
as soon as it gets to a page that it has moved some tuples onto.
(This logic doesn't make for a complete packing of the table, but it
should be pretty close.) But the way it was checking whether it had
got to a page with some moved-in tuples was to look at whether the
current page was the same as the last page of the list of pages that
have enough free space to be move-in targets. And there was other
code that would remove pages from that list once they got full.
There was a kluge that prevented the last list entry from being
removed, but it didn't get the job done. Fixed by keeping a separate
variable that contains the largest block number into which a tuple
has been moved. There's no longer any need to protect the last element
of the fraged_pages list.
Also, fix NOTICE messages to describe elapsed user/system CPU time
correctly.
relcache entry no longer leaks a small amount of memory. index_endscan
now releases all the memory acquired by index_beginscan, so callers of it
should NOT pfree the scan descriptor anymore.
didn't have time for documentation yet, but I'll write some. There are
still some things to work out regarding what happens when you alter or drop users,
but the group stuff in and of itself is done.
--
Peter Eisentraut
triggered
> function now returns the right datatype.
Oops, I got crossed up with Jan's improvements. Ignore this.
--
Peter Eisentraut
anywhere from zero to two TODO items.
* Allow flag to control COPY input/output of NULLs
I got this:
COPY table .... [ WITH NULL AS 'string' ]
which does what you'd expect. The default is \N, otherwise you can use
empty strings, etc. On Copy In this acts like a filter: every data item
that looks like 'string' becomes a NULL. Pretty straightforward.
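A quick sketch of the new clause (table and file names invented):
COPY mytable TO '/tmp/mytable.dat' WITH NULL AS '';
COPY mytable FROM '/tmp/mytable.dat' WITH NULL AS '';
-- on input, every data item matching the given string is read back as a NULL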
This also seems to be related to
* Make postgres user have a password by default
If I recall this discussion correctly, the problem was actually that the
default password for the postgres (or any) user is in fact "\N", because
of the way copy is used. With this change, the file pg_pwd is copied out
with nulls as empty strings, so if someone doesn't have a password, the
password is just '', which one would expect from a new account. I don't
think anyone really wants a hard-coded default password.
Peter Eisentraut
* Document/trigger/rule so changes to pg_shadow recreate pg_pwd
I did it with a trigger and it seems to work like a charm. The function
that already updates the file for create and alter user has been made a
built-in "SQL" function and a trigger is created at initdb time.
Comments around the pg_pwd updating function seem to be worried about
this routine being called concurrently, but I really don't see a reason
to worry about this. Verify for yourself. I guess we never had a system
trigger before, so treat this with care, and feel free to adjust the
nomenclature as well.
--
Peter Eisentraut
at all, and because of shell quoting rules this can't be fixed, so I put
in error messages to that end.
Also, calling create or drop database in a transaction block is not so
good either, because the file system mysteriously refuses to roll back rm
calls on transaction aborts. :) So I put in checks to see if a transaction
is in progress and signal an error.
Also I put the whole call in a transaction of its own to be able to roll
back changes to pg_database in case the file system operations fail.
The alternative location issues I posted recently were untouched, awaiting
the outcome of that discussion. Other than that, this should be much more
fool-proof now.
The docs I cleaned up as well.
Peter Eisentraut
This one should work much better than the one I sent in previously. The
functionality is the same, but the patch was missing one file, resulting
in the compilation failing. The docs also received a minor fix.
Peter Eisentraut
table owner in order to vacuum a table. This is mainly to prevent
denial-of-service attacks via repeated vacuums. Allow VACUUM to gather
statistics about system relations, except for pg_statistic itself ---
not clear that it's worth the trouble to make that case work cleanly.
Cope with possible tuple size overflow in pg_statistic tuples; I'm
surprised we never realized that could happen. Hold a couple of locks
a little longer to try to prevent deadlocks between concurrent VACUUMs.
There still seem to be some problems in that last area though :-(
parallel --- and, not incidentally, removing a common reason for needing
manual cleanup by the DB admin after a crash. Remove initial global
delete of pg_statistic rows in VACUUM ANALYZE; this was not only bad
for performance of other backends that had to run without stats for a
while, but it was fundamentally broken because it was done outside any
transaction. Surprising we didn't see more consequences of that.
Detect attempt to run VACUUM inside a transaction block. Check for
query cancel request before starting vacuum of each table. Clean up
vacuum's private portal storage if vacuum is aborted.
Make all system indexes unique.
Make all cache loads use system indexes.
Rename *rel to *relid in inheritance tables.
Rename cache names to be clearer.
(whoever thought world-writable files were a good default????). Modify
the pg_pwd code so that pg_pwd is created with 600 permissions. Modify
initdb so that permissions on a pre-existing PGDATA directory are not
blindly accepted: if the dir is already there, it does chmod go-rwx
to be sure that the permissions are OK and the dir actually is owned
by postgres.
The following patch extends the COMMENT ON functionality to the
rest of the database objects beyond just tables, columns, and views. The
grammar of the COMMENT ON statement now looks like:
COMMENT ON [
  [ DATABASE | INDEX | RULE | SEQUENCE | TABLE | TYPE | VIEW ] <objname> |
  COLUMN <relation>.<attribute> |
  AGGREGATE <aggname> <aggtype> |
  FUNCTION <funcname> (arg1, arg2, ...) |
  OPERATOR <op> (leftoperand_typ rightoperand_typ) |
  TRIGGER <triggername> ON <relname>
] IS 'text'
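For instance, under this extended grammar (object names invented):
COMMENT ON DATABASE testdb IS 'development copy of the production database';
COMMENT ON SEQUENCE order_id_seq IS 'feeds orders.id';
COMMENT ON FUNCTION factorial (int4) IS 'computes n!';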
Mike Mascari
(mascarim@yahoo.com)
eliminating some wildly inconsistent coding in various parts of the
system. I set MAXPGPATH = 1024 in config.h.in. If anyone is really
convinced that there ought to be a configure-time test to set the
value, go right ahead ... but I think it's a waste of time.
From the ORACLE 7 SQL Language Reference Manual:
-----------------------------------------------------
COMMENT
Purpose:
To add a comment about a table, view, snapshot, or
column into the data dictionary.
Prerequisites:
The table, view, or snapshot must be in your own schema,
or you must have COMMENT ANY TABLE system privilege.
Syntax:
COMMENT ON [ TABLE table ] |
[ COLUMN table.column] IS 'text'
You can effectively drop a comment from the database
by setting it to the empty string ''.
-----------------------------------------------------
Example:
COMMENT ON TABLE workorders IS
'Maintains base records for workorder information';
COMMENT ON COLUMN workorders.hours IS
'Number of hours the engineer worked on the task';
to drop a comment:
COMMENT ON COLUMN workorders.hours IS '';
The current patch will simply perform the insert into
pg_description, as per the TODO. And, of course, when
the table is dropped, any comments relating to it
or any of its attributes are also dropped. I haven't
looked at the ODBC source yet, but I do know from
an ODBC client standpoint that the standard does
support the notion of table and column comments.
Hopefully the ODBC driver is already fetching these
values from pg_description, but if not, it should be
trivial.
Hope this makes the grade,
Mike Mascari
(mascarim@yahoo.com)
mentioned in FROM but not elsewhere in the query: such tables should be
joined over anyway. Aside from being more standards-compliant, this allows
removal of some very ugly hacks for COUNT(*) processing. Also, allow
HAVING clause without aggregate functions, since SQL does. Clean up
CREATE RULE statement-list syntax the same way Bruce just fixed the
main stmtmulti production.
CAUTION: addition of a field to RangeTblEntry nodes breaks stored rules;
you will have to initdb if you have any rules.
expressions in CREATE TABLE. There is no longer an emasculated expression
syntax for these things; it's full a_expr for constraints, and b_expr
for defaults (unfortunately, the fact that NOT NULL is part of the
column constraint syntax causes a shift/reduce conflict if you try a_expr;
oh well --- at least parenthesized boolean expressions work now). Also,
the stored expression for a column default is not pre-coerced to the column
type; we rely on transformInsertStatement to do that when the default is
actually used. This means "f1 datetime default 'now'" behaves the way
people usually expect it to.
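A minimal sketch of what now works as expected (the table definition is
invented):
CREATE TABLE log (
    entered  datetime DEFAULT 'now',
    note     text CHECK (length(note) > 0)
);
-- 'now' is resolved when a row is inserted, not frozen at CREATE TABLE time,
-- and the parenthesized CHECK expression is accepted as a full a_expr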
BTW, all the support code is now there to implement ALTER TABLE ADD
CONSTRAINT and ALTER TABLE ADD COLUMN with a default value. I didn't
actually teach ALTER TABLE to call it, but it wouldn't be much work.
not just C, so that ISCACHABLE attribute can be specified for user-defined
functions. Get rid of ParamString node type, which wasn't actually being
generated by gram.y anymore, even though define.c thought that was what
it was getting. Clean up minor bug in dfmgr.c (premature heap_close).
Implements the CREATE CONSTRAINT TRIGGER and SET CONSTRAINTS commands.
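A rough illustration of the two commands (table, trigger and procedure names
are hypothetical):
CREATE CONSTRAINT TRIGGER orders_check AFTER INSERT OR UPDATE ON orders
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_orders();
SET CONSTRAINTS ALL DEFERRED;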
TODO:
Generic builtin trigger procedures
Automatic execution of appropriate CREATE CONSTRAINT... at CREATE TABLE
Support of new trigger type in pg_dump
Swapping of huge # of events to disk
Jan
functions. One problem that I have encountered with the function
manager is that it does not allow the user to define type conversion
functions that convert between user types. For instance if mytype1,
mytype2, and mytype3 are three Postgresql user types, and if I wish to
define Postgresql conversion functions like
I run into problems, because the Postgresql dynamic loader would look
for a single link symbol, mytype3, for both pieces of object code. If
I just change the name of one of the Postgresql functions (to make the
symbols distinct), the automatic type conversion that Postgresql uses,
for example, when matching operators to arguments no longer finds the
type conversion function.
The solution that I propose, and have implemented in the attached
patch, extends the CREATE FUNCTION syntax as follows. In the first case
above I use the link symbol mytype2_to_mytype3 for the link object
that implements the first conversion function, and define the
Postgresql operator with the following syntax
The patch includes changes to the parser to include the altered
syntax, changes to the ProcedureStmt node in nodes/parsenodes.h,
changes to commands/define.c to handle the extra information in the AS
clause, and changes to utils/fmgr/dfmgr.c that alter the way that the
dynamic loader figures out what link symbol to use. I store the
string for the link symbol in the prosrc text attribute of the pg_proc
table, which is currently unused in rows that reference dynamically
loaded functions.
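Under this proposal, the first conversion function might be declared roughly
like this (the file path is invented; the link symbol is the one mentioned
above):
CREATE FUNCTION mytype3 (mytype2) RETURNS mytype3
    AS '/usr/local/pgsql/lib/mytypes.so', 'mytype2_to_mytype3'
    LANGUAGE 'c';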
Bernie Frankpitt
* Buffer refcount cleanup (per my "progress report" to pghackers, 9/22).
* Add links to backend PROC structs to sinval's array of per-backend info,
and use these links for routines that need to check the state of all
backends (rather than the slow, complicated search of the ShmemIndex
hashtable that was used before). Add databaseOID to PROC structs.
* Use this to implement an interlock that prevents DESTROY DATABASE of
a database containing running backends. (It's a little tricky to prevent
a concurrently-starting backend from getting in there, since the new
backend is not able to lock anything at the time it tries to look up
its database in pg_database. My solution is to recheck that the DB is
OK at the end of InitPostgres. It may not be a 100% solution, but it's
a lot better than no interlock at all...)
* In ALTER TABLE RENAME, flush buffers for the relation before doing the
rename of the physical files, to ensure we don't get failures later from
mdblindwrt().
* Update TRUNCATE patch so that it actually compiles against current
sources :-(.
You should do "make clean all" after pulling these changes.
additional argument specifying the kind of lock to acquire/release (or
'NoLock' to do no lock processing). Ensure that all relations are locked
with some appropriate lock level before being examined --- this ensures
that relevant shared-inval messages have been processed and should prevent
problems caused by concurrent VACUUM. Fix several bugs having to do with
mismatched increment/decrement of relation ref count and mismatched
heap_open/close (which amounts to the same thing). A bogus ref count on
a relation doesn't matter much *unless* a SI Inval message happens to
arrive at the wrong time, which is probably why we got away with this
sloppiness for so long. Repair missing grab of AccessExclusiveLock in
DROP TABLE, ALTER/RENAME TABLE, etc, as noted by Hiroshi.
Recommend 'make clean all' after pulling this update; I modified the
Relation struct layout slightly.
Will post further discussion to pghackers list shortly.
This change seems necessary in conjunction with long queries, and it
cleans up some bogosity in connection with long EXPLAIN texts anyway.
Note that current libpq will accept any length error message (at least
until it runs out of memory); prior versions have a limit of 8K, but
will cleanly discard excess error text, so there shouldn't be any
big compatibility problems with old clients.
and fix_opids processing to a single recursive pass over the plan tree
executed at the very tail end of planning, rather than haphazardly here
and there at different places. Now that tlist Vars do not get modified
until the very end, it's possible to get rid of the klugy var_equal and
match_varid partial-matching routines, and just use plain equal()
throughout the optimizer. This is a step towards allowing merge and
hash joins to be done on expressions instead of only Vars ...
> >
> > was implemented by Jan Wieck.
> > His work is for ascending order cases.
> >
> > Here is a patch to prevent sorting also in descending
> > order cases.
> > Because I had already changed _bt_first() to position
> > backward correctly before v6.5,this patch would work.
> >
Hiroshi Inoue
Inoue@tpf.co.jp
neqsel now behave as per my suggestions in pghackers a few days ago.
selectivity for < > <= >= should work OK for integral types as well, but
still need work for nonintegral types. Since these routines have never
actually executed before :-(, this may result in some significant changes
in the optimizer's choices of execution plans. Let me know if you see
any serious misbehavior.
CAUTION: THESE CHANGES REQUIRE INITDB. pg_statistic table has changed.
special hack to ensure it would close its output file even after failure
due to elog(ERROR) partway through the copy. This is now unnecessary
because fd.c takes care of cleaning up open files at transaction abort;
worse, after fd.c closed the file, copy.c would try to do so *again* at
the start of the next COPY command. This would result in havoc in most
implementations of the stdio library.
/*
* Read above about cases when !ItemIdIsUsed(Citemid)
* (child item is removed)... Due to the fact that
* at the moment we don't remove unuseful part of
* update-chain, it's possible to get too old
* parent row here. Like as in the case which
* caused this problem, we stop shrinking here.
* I could try to find real parent row but want
* not to do it because of real solution will
* be implemented anyway, latter, and we are too
* close to 6.5 release. - vadim 06/11/99
*/
if (Ptp.t_data->t_xmax != tp.t_data->t_xmin)
...
and possibly for other cases too:
DO NOT cache status of transaction in unknown state
(i.e. non-committed and non-aborted ones)
Example:
T1 reads a row updated/inserted by running T2 and caches T2's status.
T2 commits.
Now T1 reads a row updated by T2 and with HEAP_XMAX_COMMITTED
in t_infomask (so cached T2 status is not changed).
Now T1 EvalPlanQual gets updated row version without HEAP_XMIN_COMMITTED
-> TransactionIdDidCommit(t_xmin) and TransactionIdDidAbort(t_xmin)
return FALSE, and T1 decides that t_xmin is not committed and gets the
ERROR above.
It's too late to find a smarter way to handle such cases, so
I just changed xact status caching and got rid of TransactionIdFlushCache()
in the code.
Changed: transam.c, xact.c, lmgr.c and transam.h - the last three
only because TransactionIdFlushCache() is removed.
2. heapam.c:
T1 marked a row for update. T2 waits for T1 commit/abort.
T1 commits. T3 updates the row before T2 locks row page.
Now T2 sees that the new row's t_xmax is different from the xact id (T1)
it was waiting for. The old code did an Assert here; the new one goes to
HeapTupleSatisfiesUpdate. Obvious changes too.
3. Added Assert to vacuum.c
4. bufmgr.c: break
Assert(buf->r_locks == 0 && !buf->ri_lock)
into two Asserts.
2. varsup.c:ReadNewTransactionId(): don't read nextXid from disk -
this func doesn't allocate next xid, so ShmemVariableCache->nextXid
may be used (but GetNewTransactionId() must be called first).
3. vacuum.c: change elog(ERROR, "Child item....") to elog(NOTICE) -
this is not ERROR, proper handling is just not implemented, yet.
4. s_lock.c: increase S_MAX_BUSY by 2 times.
5. shmem.c:GetSnapshotData(): have to call ReadNewTransactionId()
_after_ SpinAcquire(ShmemIndexLock).
2. Get rid of locking when updating statistics in vacuum.
3. Use QuerySnapshot in COPY TO and call SetQuerySnapshot
in the main tcop loop before FETCH and COPY TO.
been applied. The patches are in the .tar.gz attachment at the end:
varchar-array.patch this patch adds support for arrays of bpchar() and
varchar(), which were always missing from postgres.
These datatypes can be used to replace the _char4,
_char8, etc., which were dropped some time ago.
block-size.patch this patch fixes many errors in the parser and other
programs which happen with very large query statements
(> 8K) when using a page size larger than 8192.
This patch is needed if you want to submit queries
larger than 8K. Postgres supports tuples up to 32K
but you can't insert them because you can't submit
queries larger than 8K. My patch fixes this problem.
The patch also replaces all the occurrences of `8192'
and `1<<13' in the sources with the proper constants
defined in include files. You should never again find
8192 hardwired in C code; this just makes the code clearer.
--
Massimo Dal Zotto
to save a little bit of backend startup time. This way, the first
backend started after a VACUUM will rebuild the init file with up-to-date
statistics for the critical system indexes.
can be generated in a buffer and then sent to the frontend in a single
libpq call. This solves problems with NOTICE and ERROR messages generated
in the middle of a data message or COPY OUT operation.
2. Much faster btree tuples deletion in the case when first on page
index tuple is deleted (no movement to the left page(s)).
3. Remember blkno of new root page in BTPageOpaque of
left/right siblings when the root page is split.
I've been working on the following TODO list item:
* psql \d on index with char()/varchar() fields shows improper length
I've attached a simple patch to fix this.
-Ryan
Ok. I made patches replacing all of "#if FALSE" or "#if 0" with "#ifdef
NOT_USED" for current. I have tested these patches by verifying that the
postgres binaries are identical.
qualification expression trees in the execution state. Prevents
memory exhaustion on INSERT, UPDATE or COPY to tables that have CHECK
constraints. The speedup against the variant using freeObject() is more than
a factor of 2.
Jan
a field was labelled as a primary key, the system automatically
created a unique index on the field. This patch extends it so
that the index has the indisprimary field set. You can pull a list
of primary keys with the following select.
SELECT pg_class.relname, pg_attribute.attname
FROM pg_class, pg_attribute, pg_index
WHERE pg_class.oid = pg_attribute.attrelid AND
pg_class.oid = pg_index.indrelid AND
pg_index.indkey[0] = pg_attribute.attnum AND
pg_index.indisunique = 't';
There is nothing in this patch that modifies the template database to
set the indisprimary attribute for system tables. Should they be
changed or should we only be concerned with user tables?
D'Arcy
Here is a first patch to cleanup the backend side of libpq.
This patch removes all external dependencies on the "Pfin" and "Pfout" that
are declared in pqcomm.h. These variables are also changed to "static" to
enforce this.
Almost all the change is in the handler of the "copy" command - most other
areas of the backend already used the correct functions.
This change will pave the way for cleanup of the internal stuff there - now
that all the functions accessing the file descriptors are confined to a
single directory.
where you state a format and arguments. The old behavior required
each appendStringInfo to have a sprintf() before it if any
formatting was required.
Also shortened several instances where there were multiple appendStringInfo()
calls in a row, each doing nothing more than adding one more word to the
string; these are now done in a single call.
a backend core dump, because it was concatenating a potentially long
string onto another string that didn't necessarily have enough room.
Shame, shame.
Fixes a bug in the rule system that caused a crashing
backend when a join-view with calculated column is used
in subselect.
Modifies EXPLAIN to explain rewritten queries instead of
the plain SeqScan on a view. Rules can produce very deep
MORE
Jan.
Here are two new patches for the Win32 support.
1) The patch based on the one from Hiroshi Inoue [Inoue@tpf.co.jp], to load
   Winsock.dll from libpq.dll.
2) A patch for psql.c to remove the call to WSAStartup(), since it is not
   required when it's done in libpq.dll.
I'm still looking for the possibility of having a crypt() function in
libpq.dll too, the same way getopt was included. Any chance of getting this
before 6.4, or should we wait for the next one?
//Magnus
I don't know if this is really related to the initdb problem
discussion (haven't followed it enough). But it seems so, because
it fixes a damn problem during index tuple insertion on
CREATE TABLE into pg_attribute_relid_attnum_index.
Anyway - this bug was really hard to find. During startup the
relcache reads in some prepared information about index
strategies from a file and then reinitializes the function
pointers inside the scanKey data. But it assumed
single-attribute index tuples (hasn't that changed recently?).
Thus not all of the strategies' scanKey entries were initialized
properly, resulting in invalid addresses for the btree
comparison functions.
With the patch at the end, the regression tests passed cleanly
except for the sanity_check, which crashed at vacuum,
and the misc test, where 'select unique1 from onek2' outputs
the two rows in a different order.
Jan
> sequence.patch
>
> adds the missing setval command to sequences. Owner of sequences
> can now set the last value to any value between min and max
> without recreating the sequence. This is useful after loading
> data from external files.
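So after a bulk load, something like the following (sequence name and value
invented) brings the sequence back in line:
SELECT setval('orders_id_seq', 10000);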
if MULTIBYTE is not enabled. So be sure to run initdb.
o these patches are made against the latest source tree (after
Bruce's massive patch, I think). BTW, I noticed that after running
regression, the oid field of pg_type seems to have disappeared.
regression=> select oid from pg_type; ERROR: attribute
'oid' not found
This happens after the constraints test. It occurs with/without
my patches. strange...
o pg_database_mb.h, pg_class_mb.h, pg_attribute_mb.h are no longer
used, and should be removed.
o GetDatabaseInfo() in utils/misc/database.c removed (actually placed
inside #ifdef 0); nobody seems to use it.
t-ishii@sra.co.jp
no longer returns buffer pointer, can be gotten from scan
descriptor; bootstrap can create multi-key indexes;
pg_procname index now is multi-key index; oidint2, oidint4, oidname
are gone (must be removed from regression tests); use System Cache
rather than sequential scan in many places; heap_modifytuple no
longer takes buffer parameter; remove unused buffer parameter in
a few other functions; oid8 is not index-able; remove some use of
single-character variable names; cleanup Buffer variables usage
and scan descriptor looping; cleaned up allocation and freeing of
tuples; 18k lines of diff;
As Bruce mentioned, this is due to the conflict among changes we made.
Included patches should fix the problem (I changed all MB to
MULTIBYTE). Please let me know if you have further problems.
P.S. I did not include patches to configure and gram.c to save on
file size (configure.in and gram.y are modified).
From: t-ishii@sra.co.jp
Attached are patches to enhance the multi-byte support. (patches are
against 7/18 snapshot)
* determine encoding at initdb/createdb rather than compile time
Now initdb/createdb has an option to specify the encoding. Also, I
modified the syntax of CREATE DATABASE to accept encoding option. See
README.mb for more details.
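For example, the encoding option described above presumably looks something
like this (database name and encoding chosen arbitrarily):
CREATE DATABASE jpdb WITH ENCODING = 'EUC_JP';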
For this purpose I have added a new column "encoding" to pg_database.
Also pg_attribute and pg_class are changed to catch up with the
modification to pg_database. Actually I have added pg_database_mb.h,
pg_attribute_mb.h and pg_class_mb.h. These are used only when MB is
enabled. The reason for having separate files is that I couldn't find a
way to use ifdef or whatever in those files. I have to admit it looks
ugly. No way.
* support for PGCLIENTENCODING when issuing COPY command
commands/copy.c modified.
* support for SQL92 syntax "SET NAMES"
See gram.y.
* support for LATIN2-5
* add UNICODE regression test case
* new test suite for MB
New directory test/mb added.
* clean up source files
Basic idea is to have MB's own subdirectory for easier maintenance.
These are include/mb and backend/utils/mb.
now. Here are some tested features (examples included in the patch):
1.1) Subselects in the having clause
1.2) Double nested subselects
1.3) Subselects used in the where clause and in the having clause simultaneously
1.4) Union Selects using having
1.5) Indexes on the base relations are used correctly
1.6) Unallowed queries are prevented (e.g. qualifications in the
     having clause that belong to the where clause)
1.7) Insert into as select
2) Queries using the having clause on view relations also work,
   but there are some restrictions:
2.1) Create View as Select ... Having ...; using base tables in the select
2.1.1) The query rewrite system
2.1.2) Why are only simple queries allowed against a view from 2.1)?
2.2) Select ... from testview1, testview2, ... having ...
3) Bug in ExecMergeJoin ??
Regards Stefan
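A hypothetical query of the kind covered by items 1.1 and 1.3 above (table and
column names invented):
SELECT department, count(*)
FROM employees
WHERE hired > '1998-01-01'
GROUP BY department
HAVING count(*) > (SELECT count(*) FROM employees WHERE department = 'sales');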
Making PQrequestCancel safe to call in a signal handler turned out to be
much easier than I feared. So here are the diffs.
Some notes:
* I modified the postmaster's packet "iodone" callback interface to allow
the callback routine to return a continue-or-drop-connection return
code; this was necessary to allow the connection to be closed after
receiving a Cancel, rather than proceeding to launch a new backend...
Being a neatnik, I also made the iodone proc have a typechecked
parameter list.
* I deleted all code I could find that had to do with OOB.
* I made some edits to ensure that all signals mentioned in the code
are referred to symbolically not by numbers ("SIGUSR2" not "2").
I think Bruce may have already done at least some of the same edits;
I hope that merging these patches is not too painful.
As mentioned around line 1153 in backend/commands/copy.c, the method
of array checking is not perfect.
test=> create table t1 (i text);
test=> insert into t1 values('{\\.}');
INSERT 2645600 1
test=> select * from t1;
i
-----
{\\.}
(2 rows)
test=> copy t1 to '/tmp/aaa';
test=> copy t1 from '/tmp/aaa';
ERROR: CopyReadAttribute - end of record marker corrupted
Copy cannot read data produced by itself!
I have implemented a framework of encoding translation between the
backend and the frontend. Also I have added a new variable setting
command:
SET CLIENT_ENCODING TO 'encoding';
Other features include:
Latin1 support, more 8-bit cleanness.
See doc/README.mb for more details. Note that the patches are
against the May 30 snapshot.
Tatsuo Ishii
1. Rewritten libpq to allow asynchronous clients.
2. Implemented client side of cancel protocol in library,
and patched psql.c to send a cancel request upon SIGINT. The
backend doesn't notice it yet :-(
3. Implemented 'Z' protocol message addition and renaming of
copy in/out start messages. These are implemented conditionally,
ie, the client protocol version is checked; so the code should
still work with 1.0 clients.
4. Revised protocol and libpq sgml documents (don't have an SGML
compiler, though, so there may be some markup glitches here).
What remains to be done:
1. Implement addition of atttypmod field to RowDescriptor messages.
The client-side code is there but ifdef'd out. I have no idea
what to change on the backend side. The field should be sent
only if protocol >= 2.0, of course.
2. Implement backend response to cancel requests received as OOB
messages. (This prolly need not be conditional on protocol
version; just do it if you get SIGURG.)
3. Update libpq.3. (I'm hoping this can be generated mechanically
from libpq.sgml... if not, will do it by hand.) Is there any
other doco to fix?
4. Update non-libpq interfaces as necessary. I patched libpgtcl
so that it would compile, but haven't tested it. Dunno what
needs to be done with the other interfaces.
Have at it!
Tom Lane
1. Removes the unnecessary "#define AbcRegProcedure 123"'s from
   pg_proc.h.
2. Changes those #defines to use the names already defined in
   fmgr.h.
3. Forces the make of fmgr.h in backend/Makefile instead of having it
   made as a dependency in access/common/Makefile *hack*hack*hack*
4. Rearranged the #includes to a less helter-skelter arrangement, also
   changing <file.h> to "file.h" to signify a non-system header.
5. Removed "pg_proc.h" from files where its only purpose was for the
   #defines removed in item #1.
6. Added "fmgr.h" to each file changed, for completeness sake.
Turns out that #6 was not necessary for some files because fmgr.h
was being included in a roundabout way SIX levels deep by the first
include.
"access/genam.h"
->"access/relscan.h"
->"utils/rel.h"
->"access/strat.h"
->"access/skey.h"
->"fmgr.h"
So adding fmgr.h really didn't add anything to the compile, hopefully
just made it clearer to the programmer.
S Darren.
Attached you'll find a (big) patch that fixes make dep and make
depend in all Makefiles where I found it to be appropriate.
It also removes the dependency in Makefile.global for NAMEDATALEN
and OIDNAMELEN by making backend/catalog/genbki.sh and bin/initdb/initdb.sh
a little smarter.
This no longer requires initdb.sh to be turned into initdb with
a sed script when installing Postgres; hence initdb.sh should be
renamed to initdb (after the patch has been applied :-) ).
This patch is against the 6.3 sources, as it took a while to
complete.
Please review and apply,
Cheers,
Jeroen van Vianen
1. Remove the char2, char4, char8 and char16 types from postgresql.
2. Change references of char16 to name in the regression tests.
3. Rename the char16.sql regression test to name.sql.
4. Modify the regression test scripts and outputs to match up.
Might require new regression.{SYSTEM} files...
Darren King
access overrun. For the sake of doing things properly here is a
patch which fixes it.
This patch is for the file backend/commands/sequence.c.
Maurice Gittens
yyerror ones from bison. It also includes a few 'enhancements' to
the C programming style (which are, of course, personal).
The other patch removes the compilation of backend/lib/qsort.c, as
qsort() is a standard function in stdlib.h and can be used any
where else (and it is). It was only used in
backend/optimizer/geqo/geqo_pool.c, backend/optimizer/path/predmig.c,
and backend/storage/page/bufpage.c
> > Some or all of these changes might not be appropriate for v6.3,
> > since we are in beta testing and since they do not affect the
> > current functionality. For those cases, how about submitting
> > patches based on the final v6.3 release?
There's more to come. Please review these patches. I ran the
regression tests and they only failed where this was expected
(random, geo, etc).
Cheers,
Jeroen
seems that my last post didn't make it through. That's good,
since the diff itself didn't cover the renaming of
pg_user.h to pg_shadow.h and its new content.
Here it is again. The complete regression test passed with
only some float diffs. createuser and destroyuser work.
pg_shadow cannot be read by ordinary users.
Someone changed the parser to build a TypeName node on CREATE
FUNCTION in any case. As a side effect, ALL! functions
created got the proretset attribute set to true. Thus for a
SELECT the parser wrapped an Iter node around the Expr, and
since singleton functions set isDone, the Iter returns no
tuple up.
varchar length.
Cleans up code so attlen is always length.
Removed varchar() hack added earlier.
Will fix bug in selecting varchar() fields, and varchar() can be
variable length.
Patch by: wieck@sapserv.debis.de (Jan Wieck)
One of the design rules of PostgreSQL is extensibility. And
to follow this rule means (at least for me) that there should
not only be a builtin PL. Instead I would prefer a defined
interface for PL implementations.
o A new patch that contains the following changes:
-- The pg_pwd file is now cached in the postmaster's memory.
-- pg_pwd is reloaded when the postmaster detects a flag file creat()'ed
by a backend.
-- qsort() is used to sort loaded password entries, and bsearch() is
is used to find entries in the pg_pwd cache.
-- backends now copy the pg_user relation to pg_pwd.pid, and then
rename the temp file to be pg_pwd.
-- The delimiter for pg_pwd has been changed to a tab character.
Makefile.global.
End result, if all goes well, should allow for much easier porting, since
there will no longer be a concept of a "port". Most, if not everything,
*should* be determined by configure, or by the compiler itself. Still
work to be done though :)
async.c: #include <port-protos.h> surrounded by an #ifdef HAVE_STRDUP
vacuum.c: #include <port-protos.h> commented out...can someone comment as
to why it was included, as it doesn't seem to have any effect
under FreeBSD so far...would like some sort of #ifdef wrapper
like async.c if possible
start time equal to tuple->t_tmax.
Prevent shrinking if there are tuples modified by running transactions
(it concerns system relations only, currently).
table. The table name is de-allocated by the CommitTransactionCommand()
in vc_init() before it is copied into VacRel.data, and sometimes this causes
a SIGSEGV. My patch simply moves the strcpy before vc_init.
Submitted by Massimo Dal Zotto <dz@cs.unitn.it>.
Subject: [HACKERS] better access control error messages
This patch replaces the 'no such class or insufficient privilege' with
distinct error messages that tell you whether the table really doesn't
exist or whether access was denied.
as ints and longs. Touches on quite a few function args as
well. Most other files look ok as far as Oids go...still checking
though...
Since Oids are type'd as unsigned ints, they should prolly be used
with the %ud format string in elog and sprintf messages. Not sure
what kind of strangeness that could produce.
Darren King
Changes:
* Unique index capability works using the syntax 'create unique
index'.
* Duplicate OID's in the system tables are removed. I put
little scripts called 'duplicate_oids' and 'find_oid' in
include/catalog that help to find and remove duplicate OID's.
I also moved 'unused_oids' from backend/catalog to
include/catalog, since it has to be in the same directory
as the include files in order to work.
* The backend tries converting the name of a function or aggregate
to all lowercase if the original name given doesn't work (mostly
for compatibility with ODBC).
* You can 'SELECT NULL' to your heart's content.
* I put my _bt_updateitem fix in instead, which uses
_bt_insertonpg so that even if the new key is so big that
the page has to be split, everything still works.
* All literal references to system catalog OID's have been
replaced with references to define'd constants from the catalog
header files.
* I added a couple of node copy functions. I think this was a
preliminary attempt to get rules to work.
2. Reap unused tuples too
3. Reap empty pages
4. Check if a page is initialized, initialize it if not
and reap it
5. Binary search in list of reaped pages/tids to check whether
the heap tid pointed to by an index tuple is on this list
(it's mu-u-uch faster)
It adds a WITH OIDS option to the copy command, which allows
dumping and loading of oids.
If a copy command tries to load in an oid that is greater than
the current system max oid, the system max oid is incremented. No
checking is done to see if other backends are running and have cached
oids.
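A short sketch of the new option (table and path invented):
COPY mytable WITH OIDS TO '/tmp/mytable.dump';
COPY mytable WITH OIDS FROM '/tmp/mytable.dump';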
pg_dump, as its first step when using the -o (oid) option, will
copy in a dummy row to set the system max oid value, so that as rows are
loaded in they are certain to be lower than the system max oid.
pg_dump now creates indexes at the end to speed loading
Submitted by: Bruce Momjian <maillist@candle.pha.pa.us>
The CLUSTER command couldn't correctly rename the newly created heap relation.
The table base name ended up as some "temp_XXXX" instead of the correct
base name.
Submitted by: Dirk Koeser <koeser@informatik.uni-rostock.de>
pg_dump and load to 2.0. I haven't gotten any feedback on whether
people want it, so I am submitting it for others to decide. I would
recommend an install in 1.02.1.
I had said that the 2.0 pg_dump could dump a 1.02.1 database, but I was
wrong. The copy is actually performed by the backend, and the 2.0
database will not be able to read 1.02.1 databases because of the new
system columns.
This patch does several things. It copies nulls out as \N, so they can
be distinguished from '' strings. It fixes a problem where backslashes
in the input stream were not output as double-backslashes. Without this
patch, backslashes copied out were deleted upon input, or interpreted as
special characters. Third, input is now terminated by backslash-period,
which cannot be part of a normal input stream.
I tested this by creating a database with all sorts of nulls, backslash,
and period fields and dumped the database and reloaded into a new
database and compared them.
Submitted by: Bruce