Make all system indexes unique.
Make all cache loads use system indexes.
Rename *rel to *relid in inheritance tables.
Rename cache names to be clearer.
eliminating some wildly inconsistent coding in various parts of the
system. I set MAXPGPATH = 1024 in config.h.in. If anyone is really
convinced that there ought to be a configure-time test to set the
value, go right ahead ... but I think it's a waste of time.
when an initdb-forcing change has been applied within a development cycle.
PG_VERSION serves this purpose for official releases, but we can't bump
the PG_VERSION number every time we make a change to the catalogs during
development. Instead, increase the catalog version number to warn other
developers that you've made an incompatible change. See my mail to
pghackers for more info.
a generalized module 'tuplesort.c' that can sort either HeapTuples or
IndexTuples, and is not tied to execution of a Sort node. Clean up
memory leakages in sorting, and replace nbtsort.c's private implementation
of mergesorting with calls to tuplesort.c.
expressions in CREATE TABLE. There is no longer an emasculated expression
syntax for these things; it's full a_expr for constraints, and b_expr
for defaults (unfortunately the fact that NOT NULL is a part of the
column constraint syntax causes a shift/reduce conflict if you try a_expr.
Oh well --- at least parenthesized boolean expressions work now). Also,
the stored expression for a column default is not pre-coerced to the column
type; we rely on transformInsertStatement to do that when the default is
actually used. This means "f1 datetime default 'now'" behaves the way
people usually expect it to.
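For illustration, here is roughly what the grammar now accepts (a sketch
with hypothetical table/column names):
    CREATE TABLE foo (
        f1 datetime DEFAULT 'now',              -- resolved at INSERT time, not at CREATE time
        f2 int4 CHECK ((f2 > 0) AND (f2 < 100)) -- parenthesized booleans now parse
    );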
BTW, all the support code is now there to implement ALTER TABLE ADD
CONSTRAINT and ALTER TABLE ADD COLUMN with a default value. I didn't
actually teach ALTER TABLE to call it, but it wouldn't be much work.
Implements the CREATE CONSTRAINT TRIGGER and SET CONSTRAINTS commands.
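In outline, the new commands look like this (a sketch with hypothetical
names; the trigger procedure is assumed to already exist):
    CREATE CONSTRAINT TRIGGER my_check
        AFTER INSERT OR UPDATE ON mytable
        DEFERRABLE INITIALLY DEFERRED
        FOR EACH ROW EXECUTE PROCEDURE my_check_proc();
    SET CONSTRAINTS ALL DEFERRED;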
TODO:
Generic builtin trigger procedures
Automatic execution of appropriate CREATE CONSTRAINT... at CREATE TABLE
Support of new trigger type in pg_dump
Swapping of huge # of events to disk
Jan
is used to find the start scan position of index scans.
To speed up finding the scan start position, I have changed
_bt_first() to use as many keys as possible.
I'll attach the patch here.
Regards.
Hiroshi Inoue
* Buffer refcount cleanup (per my "progress report" to pghackers, 9/22).
* Add links to backend PROC structs to sinval's array of per-backend info,
and use these links for routines that need to check the state of all
backends (rather than the slow, complicated search of the ShmemIndex
hashtable that was used before). Add databaseOID to PROC structs.
* Use this to implement an interlock that prevents DESTROY DATABASE of
a database containing running backends. (It's a little tricky to prevent
a concurrently-starting backend from getting in there, since the new
backend is not able to lock anything at the time it tries to look up
its database in pg_database. My solution is to recheck that the DB is
OK at the end of InitPostgres. It may not be a 100% solution, but it's
a lot better than no interlock at all...)
* In ALTER TABLE RENAME, flush buffers for the relation before doing the
rename of the physical files, to ensure we don't get failures later from
mdblindwrt().
* Update TRUNCATE patch so that it actually compiles against current
sources :-(.
You should do "make clean all" after pulling these changes.
additional argument specifying the kind of lock to acquire/release (or
'NoLock' to do no lock processing). Ensure that all relations are locked
with some appropriate lock level before being examined --- this ensures
that relevant shared-inval messages have been processed and should prevent
problems caused by concurrent VACUUM. Fix several bugs having to do with
mismatched increment/decrement of relation ref count and mismatched
heap_open/close (which amounts to the same thing). A bogus ref count on
a relation doesn't matter much *unless* a SI Inval message happens to
arrive at the wrong time, which is probably why we got away with this
sloppiness for so long. Repair missing grab of AccessExclusiveLock in
DROP TABLE, ALTER/RENAME TABLE, etc, as noted by Hiroshi.
Recommend 'make clean all' after pulling this update; I modified the
Relation struct layout slightly.
Will post further discussion to pghackers list shortly.
See attached mail for more details.
-------------------------------------------------------------------
From: "Vadim Mikheev" <vadim@krs.ru>
To: "Hiroshi Inoue" <Inoue@tpf.co.jp>
References: <000201befa94$42fe04c0$2801007e@cadzone.tpf.co.jp>
Subject: Re: elog(ERROR) in vacuum
Date: Fri, 10 Sep 1999 10:27:10 +0900
Organization: OJSC Rostelecom (Krasnoyarsk)
Message-ID: <37D85E6E.5AFA126D@krs.ru>
Hiroshi Inoue wrote:
>
> Hello Vadim,
>
> I have a question about vacuum.
>
> VACUUM has a phase like commit which calls TransactionIdCommit().
> But if elog(ERROR) occurred after that, the status of the transaction is
> changed from XID_COMMIT to XID_ABORT.
>
> Seems to me this causes an inconsistency.
> Shouldn't AbortTransaction() be changed not to call TransactionIdAbort()
> in case of vacuum?
You're right!
As usual -:)
Vadim
transaction abort --- before it only worked if there was exactly one level
of allocation context stacked in the blank portal. Now it does the right
thing for any depth, including zero...
has positive refcount, it is rebuilt from pg_class data. This ensures
that relcache entries will track changes made by other backends. Formerly,
a shared inval report would just be ignored if it happened to arrive while
the relcache entry was in use. Also, fix relcache to reset ref counts
to zero during transaction abort. Finally, change LockRelation() so that
it checks for shared inval reports after obtaining the lock. In this way,
once any kind of lock has been obtained on a rel, we can trust the relcache
entry to be up-to-date.
Also, move responsibility for calling vc_abort into main xact.c list of
things-to-call-at-abort. What in the world was it doing down inside of
TransactionIdAbort()?
care of equal-key cases, eliminating bt_firsteq(). The linear search
formerly done by bt_firsteq() took a lot of time in the case where many
equal keys appear on the same page.
this one could be useful for people experiencing out-of-memory crashes while
executing queries which retrieve or use a very large number of tuples.
The problem happens when storage is allocated for function results used in
a large query, for example:
select upper(name) from big_table;
select big_table.array[1] from big_table;
select count(upper(name)) from big_table;
This patch is a dirty hack that fixes the out-of-memory problem for the most
common cases, like the above ones. It is not the final solution for the
problem but it can work for some people, so I'm posting it.
The patch should be safe because all changes are under #ifdef. Furthermore,
the feature can be enabled or disabled at runtime by the `free_tuple_memory'
option in the pg_options file. The option is disabled by default and must
be explicitly enabled at runtime to have any effect.
To enable the patch add the following line to Makefile.custom:
CUSTOM_COPT += -DFREE_TUPLE_MEMORY
To enable the option at runtime add the following line to pg_options:
free_tuple_memory=1
Massimo
and possibly for other cases too:
DO NOT cache the status of transactions in unknown state
(i.e. non-committed and non-aborted ones).
Example:
T1 reads a row updated/inserted by running T2 and caches T2's status.
T2 commits.
Now T1 reads a row updated by T2 and with HEAP_XMAX_COMMITTED
in t_infomask (so the cached T2 status is not changed).
Now T1's EvalPlanQual gets an updated row version without HEAP_XMIN_COMMITTED
-> TransactionIdDidCommit(t_xmin) and TransactionIdDidAbort(t_xmin)
return FALSE, and T1 decides that t_xmin is not committed and gets the
ERROR above.
It's too late to find a smarter way to handle such cases, so
I just changed xact status caching and got rid of TransactionIdFlushCache()
in the code.
Changed: transam.c, xact.c, lmgr.c and transam.h - the last three
just because TransactionIdFlushCache() is removed.
2. heapam.c:
T1 marked a row for update. T2 waits for T1 commit/abort.
T1 commits. T3 updates the row before T2 locks row page.
Now T2 sees that the new row's t_xmax is different from the xact id (T1)
it was waiting for. The old code did an Assert here; the new one goes to
HeapTupleSatisfiesUpdate. Obvious changes too.
3. Added Assert to vacuum.c
4. bufmgr.c: break
Assert(buf->r_locks == 0 && !buf->ri_lock)
into two Asserts.
2. varsup.c:ReadNewTransactionId(): don't read nextXid from disk -
this func doesn't allocate next xid, so ShmemVariableCache->nextXid
may be used (but GetNewTransactionId() must be called first).
3. vacuum.c: change elog(ERROR, "Child item....") to elog(NOTICE) -
this is not ERROR, proper handling is just not implemented, yet.
4. s_lock.c: increase S_MAX_BUSY by 2 times.
5. shmem.c:GetSnapshotData(): have to call ReadNewTransactionId()
_after_ SpinAcquire(ShmemIndexLock).
transactions will not assume that the MyProc transaction was committed
before snapshot calculations. With the old MyProc->xid assignment
(in xact.c:StartTransaction()) it was possible to see the same
row twice (I used gdb to verify this)!...
2. Assignments of InvalidTransactionId to MyProc->xid and MyProc->xmin
are moved from xact.c:CommitTransaction() to
xact.c:RecordTransactionCommit() - this invalidation must be done
before releasing transaction locks, or a bad (too high) XmaxRecent value
might be used by vacuum ("ERROR: Child itemid marked as unused"
reported by "Hiroshi Inoue" <Inoue@tpf.co.jp>; once again, gdb
allowed me to reproduce this error).
files to be closed automatically at transaction abort or commit, should
they still be open. Also close any still-open stdio files allocated with
AllocateFile at abort/commit. This should eliminate problems with leakage
of file descriptors after an error. Also, put in some primitive buffered-IO
support so that psort.c can use virtual files without severe performance
penalties.
can be generated in a buffer and then sent to the frontend in a single
libpq call. This solves problems with NOTICE and ERROR messages generated
in the middle of a data message or COPY OUT operation.
indexes.
1. An index scan using multiple indexids never scans backward
with respect to the order of the indexids.
2. A cursor using an index scan is not usable after moving
past the end.
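For instance, the second bug shows up in a session like this (hypothetical
table and cursor names):
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM t ORDER BY a;
    FETCH FORWARD ALL IN c;   -- move past the end
    FETCH BACKWARD 1 IN c;    -- formerly unusable once past the end
    END;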
This patch solves the above bugs.
Moreover, the change to _bt_first() would be useful for extending
Jan Wieck's ORDER BY patch to all descending-order cases.
Hiroshi Inoue
2. Much faster btree tuple deletion in the case when the first
index tuple on a page is deleted (no movement to the left page(s)).
3. Remember the blkno of the new root page in the BTPageOpaque of the
left/right siblings when the root page is split.
I would like some feedback on what the int8 hash function in
./backend/access/hash/hashfunc.c should return.
Also, could someone (maybe Thomas Lockhart?) look over the patch and
make sure the system table entries are correct? I've tried to research
them as much as I could, but some of them are still not clear to me.
Thanks,
-Ryan
Ok. I made patches replacing all of "#if FALSE" or "#if 0" with "#ifdef
NOT_USED" for current. I have tested these patches by checking that the
postgres binaries are identical.
so that fetching an attribute value needs only one SearchSysCacheTuple call
instead of two redundant searches. This speeds up a large SELECT by about
ten percent, and probably will help GROUP BY and SELECT DISTINCT too.
against a just-updated CVS tree. It contains:
Partial new rewrite system that handles subselects, view
aggregate columns, insert into select from view, updates
with set col = view-value and select rules restriction to
view definition.
Updates for rule/view backparsing utility functions to
handle subselects correctly.
New system views pg_tables and pg_indexes (where you can
see the complete index definition in the latter one).
Enabling array references on query parameters.
Bugfix for functional index.
Little changes to system views pg_rules and pg_views.
The rule system isn't a release-stopper any longer.
But another stopper is that I don't know if the latest
changes to PL/pgSQL (not yet in CVS) made it compile on
AIX. Still waiting for some response from Dave.
Jan
After some playing with gdb I found that in printtup() there is a non-null
attribute with typeinfo->attrs[i]->atttypid = 0 (invalid oid). Unfortunately,
attributes with an invalid type are neither printed nor marked as null, and
this explains why psql doesn't get all the expected data.
So I made this patch to printtup():
> tprintf.patch
>
> adds functions and macros which implement a conditional trace package
> with the ability to change flags and numeric options of running
> backends at runtime.
> Options/flags can be specified in the command line and/or read from
> the file pg_options in the data directory.
no longer returns buffer pointer, can be gotten from scan
descriptor; bootstrap can create multi-key indexes;
pg_procname index now is multi-key index; oidint2, oidint4, oidname
are gone (must be removed from regression tests); use System Cache
rather than sequential scan in many places; heap_modifytuple no
longer takes buffer parameter; remove unused buffer parameter in
a few other functions; oid8 is not index-able; remove some use of
single-character variable names; cleanup Buffer variables usage
and scan descriptor looping; cleaned up allocation and freeing of
tuples; 18k lines of diff;
As Bruce mentioned, this is due to the conflict among changes we made.
The included patches should fix the problem (I changed all MB to
MULTIBYTE). Please let me know if you have further problems.
P.S. I did not include patches to configure and gram.c to save
file size (configure.in and gram.y were modified).
calls. Outside a transaction, the backend detects them as buffer
leaks; it sends a NOTICE, and frees them. This sometimes causes a
segmentation fault (at least on Linux). These indexes are initialized
on the first lo_read/lo_write/lo_tell call, and (normally) closed
on a lo_close call. Thus the buffer leaks appear when lo direct
access functions are used, and not with lo_import/lo_export functions
(libpq version calls lo_close before ending the command, and the
backend version uses another path).
The included patches (against a recent snapshot, and against 6.3.2)
cause indexes to be closed on transaction end (that is, on an explicit
'END' statement, or on command termination outside transaction blocks),
thus preventing the buffer leaks while increasing performance inside
transactions. Some (all?) 'classic' memory leaks are also removed.
I hope it will be ok.
--- Pascal ANDRE, graduated from Ecole Centrale Paris andre@via.ecp.fr
I have implemented a framework of encoding translation between the
backend and the frontend. Also I have added a new variable setting
command:
SET CLIENT_ENCODING TO 'encoding';
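For example, a Shift-JIS client talking to an EUC_JP database might issue
(encoding names as listed in doc/README.mb):
    SET CLIENT_ENCODING TO 'SJIS';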
Other features include:
Latin1 support
More 8-bit cleanness
See doc/README.mb for more details. Note that the patches are
against the May 30 snapshot.
Tatsuo Ishii
1. Removes the unnecessary "#define AbcRegProcedure 123"'s from
pg_proc.h.
2. Changes those #defines to use the names already defined in
fmgr.h.
3. Forces the make of fmgr.h in backend/Makefile instead of having it
made as a dependency in access/common/Makefile *hack*hack*hack*
4. Rearranged the #includes into a less helter-skelter arrangement,
also changing <file.h> to "file.h" to signify a non-system header.
5. Removed "pg_proc.h" from files where its only purpose was for the
#defines removed in item #1.
6. Added "fmgr.h" to each file changed, for completeness' sake.
Turns out that #6 was not necessary for some files because fmgr.h
was being included in a roundabout way SIX levels deep by the first
include.
"access/genam.h"
->"access/relscan.h"
->"utils/rel.h"
->"access/strat.h"
->"access/skey.h"
->"fmgr.h"
So adding fmgr.h really didn't add anything to the compile, hopefully
just made it clearer to the programmer.
S Darren.
Attached you'll find a (big) patch that fixes make dep and make
depend in all Makefiles where I found it to be appropriate.
It also removes the dependency in Makefile.global for NAMEDATALEN
and OIDNAMELEN by making backend/catalog/genbki.sh and bin/initdb/initdb.sh
a little smarter.
This means initdb.sh no longer needs to be turned into initdb with
a sed script when installing Postgres; hence initdb.sh should be
renamed to initdb (after the patch has been applied :-) )
This patch is against the 6.3 sources, as it took a while to
complete.
Please review and apply,
Cheers,
Jeroen van Vianen
1. Remove the char2, char4, char8 and char16 types from postgresql
2. Change references of char16 to name in the regression tests.
3. Rename the char16.sql regression test to name.sql. 4. Modify
the regression test scripts and outputs to match up.
Might require new regression.{SYSTEM} files...
Darren King
#define TAPETEMP "pg_btsortXXXXXX"
to:
#define TAPETEMP "pg_btsortXXXXXXX"
For some reason, under FreeBSD, it appears that the mktemp() value needs the
extra 'X' to improve/ensure uniqueness.
select from a table with attrs (a int, b char(20))
crashed in bpcharout() (palloc of -1 bytes). But a table
with attrs (a int, b varchar(20)) worked.
From: Jan Wieck <jwieck@debis.com>
varchar length.
Cleans up code so attlen is always length.
Removed varchar() hack added earlier.
Will fix bug in selecting varchar() fields, and varchar() can be
variable length.
Patch by: wieck@sapserv.debis.de (Jan Wieck)
One of the design rules of PostgreSQL is extensibility. And
following this rule means (at least for me) that there should
not only be a builtin PL. Instead I would prefer a defined
interface for PL implementations.
==========================================
What follows is a set of diffs that cleans up the usage of BLCKSZ.
As a side effect, the person compiling the code can change the
value of BLCKSZ _at_their_own_risk_. By that, I mean that I've
tried it here at 4096 and 16384 with no ill-effects. A value
of 4096 _shouldn't_ affect much as far as the kernel/file system
goes, but making it bigger than 8192 can have severe consequences
if you don't know what you're doing. 16384 worked for me, _BUT_
when I went to 32768 and did an initdb, the SCSI driver broke and
the partition that I was running under went to hell in a hand
basket. Had to reboot and do a good bit of fsck'ing to fix things up.
The patch can be safely applied though. Just leave BLCKSZ = 8192
and everything is as before. It basically only cleans up all of the
references to BLCKSZ in the code.
If this patch is applied, a comment in the config.h file above
the BLCKSZ define warning about monkeying around with it would
be a good idea.
Darren darrenk@insightdist.com
(Also cleans up some of the #includes in files referencing BLCKSZ.)
==========================================
Essentially, this cleans things up so that if PORTNAME isn't defined (I'm
working on getting rid of it for FreeBSD, at least, to see if it's possible)
none of the PORTNAME-related stuff gets passed around.
There was also a little bit of -I-related redundancy.
Subject: [PORTS] Patches for Irix 6.4
I have worked out how to compile PostgreSQL on Irix 6.4 using the -n32
compiler mode and version 7.1 of the C compiler. (The n32 compiler uses
32-bit addressing, but allows access to all the instructions in the MIPS4
instruction set.)
There were several problems:
1) The ld command is not referenced as a macro in all the Makefiles. On
this platform, you have to include -n32 on all the ld commands. Makefiles
were changed as needed.
3) Lots of warnings are generated from the compiler. Since the regression
tests worked OK, I didn't attempt to fix them. If anyone wants the compilation
log, please let me know, and I'll email it to you.
The version of postgresql was 970602. Here is Makefile.custom:
CUSTOM_COPT = -O2 -n32
MK_NO_LORDER = 1
LD = ld -n32
CC += -n32
Subject: [PATCHES] Re: [PORTS] AIX 6.1 fixes...
Here are the patches for the two things that wouldn't make it thru the AIX
compiler. The geo_ops.c change is harmless I believe. The nbtcompare.c patch
fixes me, but I don't know about any other ports. Maybe wait on that one
until Vadim decides what to do about the unsigned vs signed chars varlena
issue.
when btree is used in an innerscan with a run-time key whose value is
passed by pointer.
Fix: key-ordering stuff moved to _bt_first().
Pointed out by Thomas Lockhart.
OK, here are a passel of patches for the geometric data types.
These add a "circle" data type, new operators and functions
for the existing data types, and change the default formats
for some of the existing types to make them consistent with
each other. Current formatting conventions (e.g. compatible
with v6.0 to allow dump/reload) are supported, but the new
conventions should be an improvement and we can eventually
drop the old conventions entirely.
For example, there are two kinds of paths (connected line segments),
open and closed, and the old format was
'(1,2,1,2,3,4)' for a closed path with two points (1,2) and (3,4)
'(0,2,1,2,3,4)' for an open path with two points (1,2) and (3,4)
Pretty arcane, huh? The new format for paths is
'((1,2),(3,4))' for a closed path with two points (1,2) and (3,4)
'[(1,2),(3,4)]' for an open path with two points (1,2) and (3,4)
For polygons, the old convention is
'(0,4,2,0,4,3)' for a triangle with points at (0,0),(4,4), and (2,3)
and the new convention is
'((0,0),(4,4),(2,3))' for a triangle with points at (0,0),(4,4), and (2,3)
Other data types which are also represented as lists of points
(e.g. boxes, line segments, and polygons) have similar representations
(they surround each point with parens).
For v6.1, any format which can be interpreted as the old-style format
is decoded as such; we can remove that backwards-compatible but ugly
convention for v7.0. This will allow dump/reloads from v6.0.
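To make the new conventions concrete, a small sketch (hypothetical table
names; the literal formats are those described above):
    CREATE TABLE paths (p path);
    INSERT INTO paths VALUES ('((1,2),(3,4))');      -- closed path
    INSERT INTO paths VALUES ('[(1,2),(3,4)]');      -- open path
    CREATE TABLE tris (f1 polygon);
    INSERT INTO tris VALUES ('((0,0),(4,4),(2,3))'); -- triangle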
These include some updates to the regression test files to change the test
for creating a data type from "circle" to "widget" to keep the test from
trashing the new builtin circle type.
index tuple (logical position within A LEVEL). bti_oid & bti_dummy
taken off from BTItemData.
2. Fix for multi-column indices (nbtsearch.c):
_bt_binsrch() - for searches on internal pages having keysize <
number of attrs we point at the last item < the scankey, not at the
first item = the scankey;
_bt_moveright() - if keysize < number of attrs we compare the scankey with
the _last_ item on the current page to decide whether we should move right
or not.
Reply-To: hackers@hub.org, Dan McGuirk <mcguirk@indirect.com>
To: hackers@hub.org
Subject: [HACKERS] tmin writeback optimization
I was doing some profiling of the backend, and noticed that during a certain
benchmark I was running somewhere between 30% and 75% of the backend's CPU
time was being spent in calls to TransactionIdDidCommit() from
HeapTupleSatisfiesNow() or HeapTupleSatisfiesItself() to determine that
changed rows' transactions had in fact been committed even though the rows'
tmin values had not yet been set.
When a query looks at a given row, it needs to figure out whether the
transaction that changed the row has been committed and hence it should pay
attention to the row, or whether on the other hand the transaction is still
in progress or has been aborted and hence the row should be ignored. If
a tmin value is set, it is known definitively that the row's transaction
has been committed. However, if tmin is not set, the transaction
referred to in xmin must be looked up in pg_log, and this is what the
backend was spending a lot of time doing during my benchmark.
So, implementing a method suggested by Vadim, I created the following
patch that, the first time a query finds a committed row whose tmin value
is not set, sets it, and marks the buffer where the row is stored as
dirty. (It works for tmax, too.) This doesn't result in the boost in
real-time performance I was hoping for; however, it does decrease backend
CPU usage by up to two-thirds in certain situations, so it could be
rather beneficial in high-concurrency settings.
Actually required by multi-column indices support.
We still don't use btree for 'A is (not) null', but
now btree keeps items with NULL attrs, using a single rule
for placing/finding items on pages:
NULLs are greater than NOT_NULLs, and NULL = NULL.
+ Bulkload code (nbtsort.c) support for multi-column indices
building and NULLs.
+ Fix for btendscan()->pfree(scanopaque) from Chris Dunlop.
Subject: [HACKERS] linux/alpha patches
These patches lay the groundwork for a Linux/Alpha port. The port doesn't
actually work unless you tweak the linker to put all the pointers in the
first 32 bits of the address space, but it's at least a start. It
implements the test-and-set instruction in Alpha assembly, and also fixes
a lot of pointer-to-integer conversions, which is probably good anyway.
Subject: [HACKERS] abort failed transaction patch
This patch allows you to end a transaction that has failed on an error
using the 'ABORT' statement without generating another error message.
(By default you get an error unless you use 'END' to terminate the
transaction, which has already been aborted anyway.)
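A minimal session showing the change (hypothetical table name):
    BEGIN;
    SELECT * FROM no_such_table;  -- fails; the transaction is now aborted
    ABORT;                        -- with this patch, no second error message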
Patches from: aoki@CS.Berkeley.EDU (Paul M. Aoki)
i gave jolly my btree bulkload code a long, long time ago but never
gave him a bunch of my bugfixes. here's a diff against the 6.0
baseline.
for some reason, this code has slowed down somewhat relative to the
insertion-build code on very small tables. don't know why -- it used
to be within about 10%. anyway, here are some (highly unscientific!)
timings on a dec 3000/300 for synthetic tables with 10k, 100k and
1000k tuples (basically, 1mb, 10mb and 100mb heaps). 'c' means
clustered (pre-sorted) inputs and 'u' means unclustered (randomly
ordered) inputs. the 10k table basically fits in the buffer pool, but
the 100k and 1000k tables don't. as you can see, insertion build is
fine if you've sorted your heaps on your index key or if your heap
fits in core, but is absolutely horrible on unordered data (yes,
that's 7.5 hours to index 100mb of data...) because of the zillions of
random i/os.
if it doesn't work for you for whatever reason, you can always turn it
back off by flipping the FastBuild flag in nbtree.c. i don't have
time to maintain it.
good luck!
baseline code:
time psql -c 'create index c10 on k10 using btree (c int4_ops)' bttest
real 8.6
time psql -c 'create index u10 on k10 using btree (b int4_ops)' bttest
real 9.1
time psql -c 'create index c100 on k100 using btree (c int4_ops)' bttest
real 59.2
time psql -c 'create index u100 on k100 using btree (b int4_ops)' bttest
real 652.4
time psql -c 'create index c1000 on k1000 using btree (c int4_ops)' bttest
real 636.1
time psql -c 'create index u1000 on k1000 using btree (b int4_ops)' bttest
real 26772.9
bulkloading code:
time psql -c 'create index c10 on k10 using btree (c int4_ops)' bttest
real 11.3
time psql -c 'create index u10 on k10 using btree (b int4_ops)' bttest
real 10.4
time psql -c 'create index c100 on k100 using btree (c int4_ops)' bttest
real 59.5
time psql -c 'create index u100 on k100 using btree (b int4_ops)' bttest
real 63.5
time psql -c 'create index c1000 on k1000 using btree (c int4_ops)' bttest
real 636.9
time psql -c 'create index u1000 on k1000 using btree (b int4_ops)' bttest
real 701.0
As an example, I sent a bug report on 26 Nov pointing out that the fix
included below is necessary to compile pg95-current on Ultrix with
Digital's standard C compiler c89. In fact I think that this fix is needed
for any C compiler sticking very close to the standard; see my discussion
in the original bug report.
Erik Bertelsen
cc1: warnings being treated as errors
transsup.c: In function `TransBlockGetLastTransactionIdStatus':
transsup.c:122: warning: unsigned value >= 0 is always 1
gmake[3]: *** [transsup.o] Error 1
...
(the old _bt_compare always returned >= 0 while comparing with P_HIKEY
on the root page - it breaks the root page when _bt_insertonpg tries to
insert a new minimal key into the root page).
2. Fixed a bug concerning "empty" pages: non-rightmost pages with only P_HIKEY
present on them. Such pages appear after vacuum.
Changes:
* Unique index capability works using the syntax 'create unique
index' (see the example after this list).
* Duplicate OID's in the system tables are removed. I put
little scripts called 'duplicate_oids' and 'find_oid' in
include/catalog that help to find and remove duplicate OID's.
I also moved 'unused_oids' from backend/catalog to
include/catalog, since it has to be in the same directory
as the include files in order to work.
* The backend tries converting the name of a function or aggregate
to all lowercase if the original name given doesn't work (mostly
for compatibility with ODBC).
* You can 'SELECT NULL' to your heart's content.
* I put my _bt_updateitem fix in instead, which uses
_bt_insertonpg so that even if the new key is so big that
the page has to be split, everything still works.
* All literal references to system catalog OID's have been
replaced with references to define'd constants from the catalog
header files.
* I added a couple of node copy functions. I think this was a
preliminary attempt to get rules to work.
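As a minimal illustration of the unique-index syntax (hypothetical table
and index names):
    CREATE TABLE t (a int4, b text);
    CREATE UNIQUE INDEX t_a_key ON t USING btree (a int4_ops);
    INSERT INTO t VALUES (1, 'one');
    INSERT INTO t VALUES (1, 'uno');  -- now rejected as a duplicate key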
I found another bug in btree index. Looking at the code it seems that NULL
keys are never used to build or scan a btree index (see the explain commands
in the example). However this is not the case when a null key is retrieved
in an outer loop of a join select and used in an index scan of an inner loop.
This bug causes at least three kinds of problems:
1) the backend crashes when it tries to compare a text string with a null.
2) it is not possible to find tuples with null keys in a join.
3) null is considered equal to 0 when the datum is passed by value, see
the last query.
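A sketch of the failing query shape (hypothetical tables, with a btree
index on t2.a and a row in t1 whose key is NULL):
    SELECT t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a;
    -- the NULL outer key is fed to the inner index scan on t2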
Submitted by: Massimo Dal Zotto <dz@cs.unitn.it>
My guess is that the thing had bugs, and the pfree was commented out.
The thing is probably free'd anyway at the end, so it was not a bad
thing.
If it does cause a bug, it will generate an error when hit, so I say
unless someone else knows, let's remove it and run the regression test.
-Bruce
- Added the header access/heapam.h.
- Changed all instances of "length" to "data_length" to quiet
the compiler.
- initialized a few variables. The compiler couldn't see that
the code guaranteed that these would be initialized before
being dereferenced. If anyone wants to check my work, follow
the usage of these variables and make sure that this is true
and wasn't actually a bug in the original code.
- added a missing break statement to a default case. This
was a benign error but bad style.
- laid out heap_sysattrlen differently. I think this way
makes the structure of the code crystal clear. There should
be no difference in the actual behaviour of the code.
Submitted by: darcy@druid.druid.com (D'Arcy J.M. Cain)
It adds a WITH OIDS option to the copy command, which allows
dumping and loading of oids.
If a copy command tries to load in an oid that is greater than
the current system max oid, the system max oid is incremented. No
checking is done to see if other backends are running and have cached
oids.
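A sketch of the option in use (hypothetical table and file names):
    COPY mytable WITH OIDS TO '/tmp/mytable.dat';
    COPY mytable WITH OIDS FROM '/tmp/mytable.dat';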
pg_dump, as its first step when using the -o (oid) option, will
copy in a dummy row to set the system max oid value, so that as rows are
loaded in, they are certain to be lower than the system oid.
pg_dump now creates indexes at the end, to speed loading.
Submitted by: Bruce Momjian <maillist@candle.pha.pa.us>
When you try to do any UPDATE of the catalog class pg_class, such as
to change ownership of a class, the backend crashes.
This is really two serial bugs: 1) there is a hardcoded copy of the
schema of pg_class in the postgres program, and it doesn't match the
actual class that initdb creates in the database; 2) Parts of postgres
determine whether to pass an attribute value by value or by reference
based on the attbyval attribute of the attribute in class
pg_attribute. Other parts of postgres have it hardcoded. For the
relacl[] attribute in class pg_class, attbyval does not match the
hardcoded expectation.
The fix is to correct the hardcoded schema for pg_attribute and to
change the fetchatt macro so it ignores attbyval for all variable
length attributes. The fix also adds a bunch of logic documentation and
extends genbki.sh so it allows source files to contain such documentation.
--
Bryan Henderson Phone 408-227-6803
San Jose, California
Postgres is not able to cluster a relation on which an rtree index is
defined. Postmaster gives the following error message:
Too Large Allocation Request("!(0 < (size) && (size) <= (0xfffffff)):size=0
[0x0]", File:"/export/home/postgres/src/backend/utils/mmgr/mcxt.c", Line: 220)
!(0 <(size) && (size) <= (0xfffffff)) (0) [No such file or directory]
Submitted by: Dirk Koeser <koeser@informatik.uni-rostock.de>
Here's a small patch for something that my run-time checker whines
about incessantly. The justification for the patch is along the
lines of: passing a NULL is allowed if you have an
argument that is a *POINTER* to something, but if
the argument is an array reference, it's not really
a "pointer", so it can't be NULL.
If you question this, I refer you to
<URL:http://www.va.pubnix.com/staff/djm/lore/arrays-are-not-pointers>
Anyways, here's the patch:
-Kurt
Submitted by: "Kurt J. Lidl" <lidl@va.pubnix.com>
> INDEXED searches in some cases DO NOT WORK.
> Although simple search expressions (i.e. with a constant value on
> the right side of an operator) work, performing a join (by putting
> a field of some other table on the right side of an operator) produces
> empty output.
> WITHOUT indices, everything works fine.
>
submitted by: "Vadim B. Mikheev" <root@ais.sable.krasnoyarsk.su>