Hiroshi. ReleaseRelationBuffers now removes rel's buffers from pool,
instead of merely marking them nondirty. The old code would leave valid
buffers for a deleted relation, which didn't cause any known problems
but can't possibly be a good idea. There were several places which called
ReleaseRelationBuffers *and* FlushRelationBuffers, which is now
unnecessary; but there were others that did not. FlushRelationBuffers
no longer emits a warning notice if it finds dirty buffers to flush,
because with the current bufmgr behavior that's not an unexpected
condition. Also, FlushRelationBuffers will flush out all dirty buffers
for the relation regardless of block number. This ensures that
pg_upgrade's expectations are met about tuple on-row status bits being
up-to-date on disk. Lastly, tweak BufTableDelete() to clear the
buffer's tag so that no one can mistake it for being a still-valid
buffer for the page it once held. Formerly, the buffer would not be
found by buffer hashtable searches after BufTableDelete(), but it would
still be thought to belong to its old relation by the routines that
sequentially scan the shared-buffer array. Again I know of no bugs
caused by that, but it still can't be a good idea.
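As a rough illustration of that last point (structure and field names here are made up, not the real bufmgr declarations), clearing the tag after the hashtable entry is gone amounts to:

/* Hypothetical sketch only; layout does not match the real bufmgr. */
#include <string.h>

typedef struct SketchBufferTag
{
    unsigned int relid;         /* owning relation */
    unsigned int blocknum;      /* block within that relation */
} SketchBufferTag;

typedef struct SketchBufferDesc
{
    SketchBufferTag tag;
    int             flags;
} SketchBufferDesc;

/* Called right after the buffer's hashtable entry has been deleted. */
static void
sketch_clear_buffer_tag(SketchBufferDesc *buf)
{
    /* an all-zero tag matches no real relation/block pair, so routines
     * that sequentially scan the buffer array cannot mistake this buffer
     * for a still-valid page of its old relation */
    memset(&buf->tag, 0, sizeof(buf->tag));
}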
RowExclusive (my fault). Also, install a check to prevent people
from trying COPY BINARY to stdout/from stdin. No way that will
work unless we redesign the frontend COPY protocol ... which is
not worth the trouble in the near future ...
is in <string> and not in <string.h> on QNX4/egcs-2.91.60.
Probably this can be changed for all platforms. The test in line 1705 uses
<string> as well. Because I am not sure, I haven't included this in the
patch.
doc/Makefile has to be slightly modified in the same way as was done for
src/backend/Makefile, due to a QNX4 problem (patch attached).
Furthermore, src/test/regress/run_check.sh needs to be patched as was done
for regress.sh (patch attached). Please note that in the patch the
postmaster is always started with the -i option.
run_check.sh reports the test "limit" as failed, but in reality it is OK.
regress.sh reports it as OK.
Andreas Kardos
IRIX systems using the native compilers. A summary is:
- Various files use "//" as a comment delimiter in C files.
- Problems caused by assuming "char" is signed.
cash.in: when building with -signed, the rules regression test fails as
described in FAQ_QNX4. If CHAR_MAX is "255U" then ((signed char) CHAR_MAX)
is -1 (see the sketch below).
postmaster.c: random number regression test failed without this change.
- Some generic build issues and warning message cleanup.
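A small standalone demonstration of the signed-char pitfall mentioned above
(nothing PostgreSQL-specific; the exact result of the cast is
implementation-defined, but it is typically -1 on these platforms):

#include <limits.h>
#include <stdio.h>

int
main(void)
{
    /* On platforms where plain "char" is unsigned, CHAR_MAX is 255
     * and the cast below usually wraps around to -1. */
    printf("CHAR_MAX               = %d\n", (int) CHAR_MAX);
    printf("(signed char) CHAR_MAX = %d\n", (int) (signed char) CHAR_MAX);
    return 0;
}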
David Kaelbling
just use the portable form,
tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz
There were a bunch of places that weren't paying attention to configure's
result anyway (including configure itself!?); clean them up too.
I'm including a diff of
postgresql-7.0/src/interfaces/jdbc/org/postgresql/jdbc2/ResultSet.java.
I've clearly marked all the fixes I did. Would *someone* who has access
to CVS please put this in?
Joseph Shraibman
days. It seems to be a FAQ, and I think I know why. When creating a 'c'
language function, CREATE FUNCTION is fed the shared object filename,
and seems to succeed. Only when trying to use the function is an error
thrown, by which time the coder thinks something's wrong with executing
the code, not with loading it.
I think I once saw it proposed to load shared objects at function creation
time, but that idea was shot down on the grounds of resident memory bloat,
ISTR. Here's a patch for a compromise: all it does is stat() the file,
just like the loader code does, so that the errors caused by nonexistent
files and missing directory 'x' permissions (the most common ones, it seems)
get caught while the developer is still thinking about code loading. It
doesn't catch all errors (like the code not being readable by the postgres
user) but seems to catch the most common, without actually opening the file.
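Here is roughly the shape of the check, as a hedged sketch with a made-up
helper name (the actual patch lives in the CREATE FUNCTION code path):

#include <sys/stat.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: report obvious problems with a shared-object path
 * at CREATE FUNCTION time, without opening or loading the file. */
static int
check_object_file(const char *path)
{
    struct stat st;

    if (stat(path, &st) < 0)
    {
        /* catches nonexistent files and missing directory 'x' permission */
        fprintf(stderr, "could not access \"%s\": %s\n",
                path, strerror(errno));
        return -1;
    }
    if (!S_ISREG(st.st_mode))
    {
        fprintf(stderr, "\"%s\" is not a regular file\n", path);
        return -1;
    }
    return 0;   /* looks plausible; actual loading can still fail later */
}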
What do you think?
Ross
indexes, apparently, nor on functional indexes with more than one input
column (the forcing of natts = 1 was in the wrong branch of the IF statement).
Coredumped if source relation contained any uncommitted tuples, due to
failure to test for success return from heap_fetch. Fetched tuple
was passed directly to heap_insert, which clobbers the TID and commit
status in the tuple header it's given, which meant that the source
relation's tuples all got trashed as the copy proceeded. Abort partway
through, and you're left with a lot of missing tuples.
I wonder what else is lurking here ...
under FreeBSD ... basically, if setproctitle() exists, use it ...
the drawback right now is that the PS_SET_STATUS stuff doesn't work, but I am
looking into that one right now ... at least now you can see who is connecting
where and from where ...
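For reference, the FreeBSD call in question is setproctitle(3); a minimal
hedged sketch (the status text the real patch builds is more elaborate):

#include <sys/types.h>
#include <unistd.h>     /* setproctitle(3) on FreeBSD */

/* Hypothetical wrapper: make "who is connecting where" visible in ps. */
static void
sketch_show_activity(const char *user, const char *dbname, const char *host)
{
    setproctitle("%s %s %s", user, dbname, host);
}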
mappings. In fact, it had them backward because it was using the 6.5.*
code. Copied them from parser/gram.y, so it is fixed now. Looks like
our first 7.0.1 fix. Oops, it seems Tom beat me to it as I was typing
this.
Rearrange handling of VACUUMs so that they are certain to be executed
as superuser not some random user; also, do not forget to vacuum
template1 itself.
pg_char_to_encoding() in the multibyte-disabled case so that it does not
throw an error, but rather returns a hard-coded default value (currently
SQL_ASCII). This would solve the "non-mb backend vs. mb-enabled frontend"
problem.
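In sketch form (the numeric constant here is illustrative, the real encoding
IDs come from the multibyte headers), the stubbed-out function simply hands
back the default instead of erroring out:

/* Sketch of the no-multibyte stub: never error out, just return the
 * hard-coded default encoding. */
#define SQL_ASCII 0             /* illustrative value */

int
pg_char_to_encoding(const char *name)
{
    (void) name;                /* ignored when multibyte support is disabled */
    return SQL_ASCII;
}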
cleanup, i.e., as soon as we have caught the longjmp. This ensures that
the current context will be a valid context throughout error cleanup. Before,
it was possible that the current context was pointing at a context that would
get deleted during cleanup, leaving any subsequent pallocs in deep
trouble. I was able to provoke an Assert failure when compiled with
asserts + -DCLOBBER_FREED_MEMORY, if I did something that would cause
an error to be reported by the backend large-object code, because indeed
that code operates in a context that gets deleted partway through xact
abort --- and CurrentMemoryContext was still pointing at it! Boo hiss.
cases where joinclauses were present but some joins have to be made
by cartesian-product join anyway. An example is
SELECT * FROM a,b,c WHERE (a.f1 + b.f2 + c.f3) = 0;
Even though all the rels have joinclauses, we must join two of them
in cartesian style before we can use the join clause...
than BIND_DEFERRED. That way, if the loaded library has unresolved
references, shl_load fails cleanly. As we had it, shl_load would
succeed and then the dynlinker would call abort() when we try to call
into the loaded library. abort()ing a backend is uncool.
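For context, a hedged HP-UX-only sketch of the call (any extra flags the
real code passes are omitted here):

#include <dl.h>                 /* HP-UX shl_load(3X) */
#include <stdio.h>

/* Hypothetical wrapper: with BIND_IMMEDIATE, unresolved references make
 * shl_load() itself fail, instead of the dynlinker abort()ing later when
 * we first call into the loaded library. */
static shl_t
sketch_load_library(const char *filename)
{
    shl_t handle = shl_load(filename, BIND_IMMEDIATE, 0L);

    if (handle == NULL)
        fprintf(stderr, "shl_load of \"%s\" failed\n", filename);
    return handle;
}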
always failed if Perl makefile's INSTALLSITELIB variable was specified
in terms of another variable. Fix by adding an echo-installdir target
to the Perl makefile, which the upper-level Makefile can invoke.
libpq++.h contained copies of the class declarations in the other libpq++
include files, which was bogus enough, but the declarations were not
completely in step with the real declarations. Remove these in favor
of including the headers with #include. Make PgConnection destructor
virtual (not absolutely necessary, but seems like a real good idea
considering the number of subclasses derived from it). Give all the classes
private copy constructors and assignment operators, to prevent the
compiler from thinking it can copy these objects safely.
compiler than the one selected to build Postgres with. It was trying
to feed Postgres-compiler switches to Tcl's compiler. (Seen this before
with the perl5 interface...) Fix to use only CFLAGS taken from Tcl's
configure information, plus -I which is pretty universal.
unless you feed it the -Aa or -Ae switch. Autoconf does not know about this,
but we can fix it in the hpux_cc template file. I knew templates were
good for something ;-)
to give wrong results: it should be looking at inJoinSet not inFromCl.
Also, make 'modified' flag be local to ApplyRetrieveRule: we should
append a rule's quals to the query iff that particular rule applies,
not if we have fired any previously-considered rule for the query!
(SELECT FROM table*). Cause was reference to 'eref' field of an RTE,
which is null in an RTE loaded from a stored rule parsetree. There
wasn't any good reason to be touching the refname anyway...
table for an average of NTUP_PER_BUCKET tuples/bucket, but cost_hashjoin
was assuming a target load of one tuple/bucket. This was causing a
noticeable underestimate of hashjoin costs.
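A rough worked example of the size of the error (the NTUP_PER_BUCKET value
below is an assumption, and the real cost model has more terms than this):

#include <stdio.h>

#define NTUP_PER_BUCKET 10      /* assumed bucket-sizing target */

int
main(void)
{
    double outer_tuples = 100000.0;

    /* old model: one tuple per bucket, so ~0.5 comparisons per probe */
    double old_compares = outer_tuples * 0.5;

    /* corrected model: each probe scans about half of a bucket that was
     * sized to hold NTUP_PER_BUCKET tuples */
    double new_compares = outer_tuples * (NTUP_PER_BUCKET / 2.0);

    printf("old estimate: %.0f comparisons, corrected: %.0f\n",
           old_compares, new_compares);
    return 0;
}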
(LIKE and regexp matches). These are not yet referenced in pg_operator,
so by default the system will continue to use eqsel/neqsel.
Also, tweak convert_to_scalar() logic so that common prefixes of strings
are stripped off, allowing better accuracy when all strings in a table
share a common prefix.
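A self-contained sketch of the prefix-stripping idea (the helper and its use
here are illustrative, not the backend's actual selectivity code):

#include <stdio.h>

/* Advance both strings past their shared leading characters and return
 * pointers to the differing remainders. */
static const char *
strip_common_prefix(const char *a, const char *b, const char **b_rest)
{
    while (*a && *a == *b)
    {
        a++;
        b++;
    }
    *b_rest = b;
    return a;
}

int
main(void)
{
    const char *lo = "http://apple.example";
    const char *hi = "http://zebra.example";
    const char *hi_rest;
    const char *lo_rest = strip_common_prefix(lo, hi, &hi_rest);

    /* mapping "apple..." vs "zebra..." onto a numeric scale gives a far
     * better range estimate than mapping two strings that both start
     * with "http://" */
    printf("compare \"%s\" vs \"%s\" instead of the full strings\n",
           lo_rest, hi_rest);
    return 0;
}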
subsequent elogs() in the same COPY operation to display the wrong
line number. Fix is to clear lineno only when elog level is such
that we will not return to caller.
repaired psql option-scanning bug (special treatment of \g |pipe)
fixed ipcclean makefile
made configure look for Perl so that the psql help build is handled gracefully
extern int inet_aton(const char *cp, struct in_addr * addr);
appearing before the optional #define for const, which was certain
to fail on a machine with neither const nor inet_aton().
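In outline, the declaration has to come after the compatibility define,
roughly like this (HAVE_CONST and HAVE_INET_ATON are stand-ins for whatever
configure actually defines):

#include <netinet/in.h>         /* struct in_addr */

/* The empty-const define must precede any prototype that uses const,
 * or a pre-ANSI compiler chokes on the prototype itself. */
#ifndef HAVE_CONST              /* stand-in for the configure symbol */
#define const
#endif

#ifndef HAVE_INET_ATON          /* stand-in for the configure symbol */
extern int inet_aton(const char *cp, struct in_addr *addr);
#endif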
compiler will understand them. configure may have #define'd them to
empty because the local C compiler doesn't understand them, but this
may very well cause a C++ compilation to fail, so don't do it in C++.
all platforms, not just SCO. The operation is undefined for Unix-domain
sockets anyway. It seems SCO is not the only platform that complains
instead of treating the call as a no-op.
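As a hedged sketch (the surrounding function names in the backend differ),
the guard is just an address-family check before the setsockopt() call:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Only apply TCP_NODELAY to IP sockets; the option is undefined for
 * Unix-domain sockets and some platforms reject it outright. */
static int
set_nodelay_if_tcp(int sock, int family)
{
    int on = 1;

    if (family != AF_INET)
        return 0;               /* treat as a no-op for Unix-domain sockets */
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                      (char *) &on, sizeof(on));
}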
include the version from backend/port into libpq.
There is a second-rate implementation of inet_aton() already present
in fe-connect.c, #ifdef'd WIN32. That ought to be removed in favor
of using the better version from port/. However, since I'm not in a
position to test the WIN32 code, I will leave well enough alone for
this release...
contained a sub-SELECT nested within an AND/OR tree that cnfify()
thought it should rearrange. Same physical sub-SELECT node could
end up linked into multiple places in resulting expression tree.
This is harmless for most node types, but not for SubLink.
Repair bug by making physical copies of subexpressions that get
logically duplicated by cnfify(). Also, tweak the heuristic that
decides whether it's a good idea to do cnfify() --- we don't really
want that to happen when it would cause multiple copies of a subselect
to be generated, I think.
whether to do fsync or not, and if so (which should be seldom) just
do the fsync immediately. This way we need not build data structures
in md.c/fd.c for blind writes.
logged queries to 1024, truncating longer queries. That is about half of
the size I need (I have a union that is 2K long). Can someone consider
bumping it to 4K or so? Patch attached...
Regards,
Ed Loehr
as a shared dirtybit for each shared buffer. The shared dirtybit still
controls writing the buffer, but the local bit controls whether we need
to fsync the buffer's file. This arrangement fixes a bug that allowed
some required fsyncs to be missed, and should improve performance as well.
For more info see my post of same date on pghackers.
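A very rough sketch of the two-flag arrangement (structure and names are made
up; see the pghackers post for the real design):

#define SKETCH_NBUFFERS 1024

/* Shared flag: conceptually lives in shared memory and says the page
 * must eventually be written out. */
static int shared_dirty[SKETCH_NBUFFERS];

/* Local flag: per backend; says *this* backend dirtied the page and so
 * must fsync the underlying file before commit. */
static char local_needs_fsync[SKETCH_NBUFFERS];

static void
sketch_mark_dirty(int buf_id)
{
    shared_dirty[buf_id] = 1;           /* controls writing the buffer */
    local_needs_fsync[buf_id] = 1;      /* controls whether we fsync its file */
}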
than not knowing what they are at all. Perhaps they should have their own
type category? Hard to say. In the meantime, doing it this way allows
SELECT 'unknown' || 'unknown' to continue being resolved as textcat,
instead of spitting out an ambiguous-operator error.
parse node types. This allows these statements to be placed in a plpgsql
function. Also, see to it that statement types not handled by the copy
logic will draw an appropriate elog(ERROR), instead of leaving a null
pointer that will cause a coredump later on. More utility statements could
be added if anyone felt like turning the crank.
Add a random number generator and seed setter (random(), SET SEED)
Fix up the interval*float8 math to carry partial months
into the time field (see the worked example after this list of changes).
Add float8*interval so we have symmetry in the available math.
Fix the parser and define.c to accept SQL92 types as field arguments.
Fix the parser to accept SQL92 types for CREATE TYPE, etc. This is
necessary to allow...
Bit/varbit support in contrib/bit cleaned up to compile and load
cleanly. Still needs some work before final release.
Implement the "SOME" keyword as a synonym for "ANY" per SQL92.
Implement ascii(text), ichar(int4), repeat(text,int4) to help
support the ODBC driver.
Enable the TRUNCATE() function mapping in the ODBC driver.
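A worked example of the partial-month carry mentioned above, assuming the
usual 30-day-month convention (the real interval code keeps the remainder in
the time field as seconds):

#include <math.h>
#include <stdio.h>

int
main(void)
{
    double months = 1.0;
    double factor = 1.5;

    double scaled = months * factor;                    /* 1.5 months */
    double whole_months = floor(scaled);                /* 1 month    */
    double carried_seconds = (scaled - whole_months)    /* 0.5 month  */
                             * 30.0 * 86400.0;          /* as seconds */

    printf("1 mon * 1.5 = %g mon + %g days\n",
           whole_months, carried_seconds / 86400.0);
    return 0;
}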
Ensure that outer tuple link needed for inner indexscan qual evaluation
gets set in the EvalPlanQual case. This stops coredump, but we still
have resource leaks due to failure to clean up EvalPlanQual properly...
It does work with the following patch applied and gcc 2.95.2.
Use --with-template=aix_gcc to compile the whole lot with gcc.
The geometry regression test produces different precision.
With optimization I run into regression failures starting at oidjoins,
thus no -O2. Has anybody else tried gcc 2.95.2 and -O2 on beta4?
This is an important patch, since recent versions of the IBM compiler
are not free, and thus most of the questions I get concern gcc.
Andreas
PS.: I am testing with beta4
xact abort state in pg_exec_query_dest, we should continue scanning the
querytree list, on the off chance that one of the later queries in the
string is COMMIT or ROLLBACK.
would crash, due to premature invocation of SetQuerySnapshot(). Clean
up problems with handling of multiple queries by splitting
pg_parse_and_plan into two routines. The old code would not, for
example, do the right thing with END; SELECT... submitted in one query
string when it had been in transaction abort state, because it'd decide
to skip planning the SELECT before it had executed the END. New
arrangement is simpler and doesn't force caller to plan if only
parse+rewrite is needed.
be an expression not just a simple Var, so long as only one table is
referenced (so that code isn't really any more difficult than before).
This whole thing is still fundamentally bogus, but at least we can accept
a few more cases than before.
WHERE in a place where it can be part of a nestloop inner indexqual.
As the code stood, it put the same physical sub-Plan node into both
indxqual and indxqualorig of the IndexScan plan node. That confused
later processing in the optimizer (which expected that tracing the
subPlan list would visit each subplan node exactly once), and would
probably have blown up in the executor if the planner hadn't choked first.
Fix by making the 'fixed' indexqual be a complete deep copy of the
original indexqual, rather than trying to share nodes below the topmost
operator node. This had further ramifications though, because we were
making the aforesaid list of sub-Plan nodes during SS_process_sublinks
which is run before construction of the 'fixed' indexqual, meaning that
the copy of the sub-Plan didn't show up in that list. Fix by rearranging
logic so that the sub-Plan list is built by the final set_plan_references
pass, not in SS_process_sublinks. This may sound like a mess, but it's
actually a good deal cleaner now than it was before, because we are no
longer dependent on the assumption that planning will never make a copy
of a sub-Plan node.
Should be more robust to overflows.
Pass through an unmapped function unchanged, rather than rejecting it.
Add a few more functions, but comment out those which can go through as-is.
Can be used with the contrib/odbc/ package, though that isn't committed yet.
here is an updated version of the bit type with a bugfix and all the
necessary SQL functions defined. This should replace what is currently in
contrib. I'd appreciate any comments on what is there.
Kind regards,
Adriaan
pg_internal.init file in-place, which meant that if another backend
started at about the same time, it might read the incomplete file.
init_irels tries to guard against that, but I have now seen a crash
due to reading bad data from a partly-written file. (This may indicate
a kernel bug on my platform? Not sure.) Anyway, clearly the safest
course is to write the new pg_internal.init file under a unique temporary
filename, and rename it into place only after it's all written.
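The pattern, in sketch form (helper name and buffer handling are
illustrative):

#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: write the new file under a per-process temporary
 * name, then rename() it into place.  Other backends therefore see either
 * the complete old file or the complete new one, never a partial write. */
static int
sketch_rewrite_init_file(const char *finalname, const char *contents)
{
    char  tempname[1024];
    FILE *fp;

    snprintf(tempname, sizeof(tempname), "%s.%d", finalname, (int) getpid());

    fp = fopen(tempname, "w");
    if (fp == NULL)
        return -1;
    fputs(contents, fp);
    if (fclose(fp) != 0)
        return -1;

    /* rename() atomically replaces any existing file of the final name */
    return rename(tempname, finalname);
}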
- I was unable to compile ecpg due to the ":=" instead of "=" in defining
LIBPQDIR and some other variables in Makefile.global.in
- pg_id (and also pg_encoding) executables were not removed during "make
clean" - there was no $(X) appended to the executable name passed to rm
- I have added result for int2, int4, float8 and geometry regression tests
- int2, int4 - yet another message for too large numbers ;-)
- float8 - it is probably a bug in the newlib C library - it has no
error message for numbers with exponent -400
- geometry - differences in precision of float numbers
- I have added appropriate lines into resultmap file
- I have modified the script regress.sh to use a "case" statement when testing
the hostname. For Cygwin the script is called with "i686-pc-cygwin" (on my
machine) as a parameter and this was not caught by the "if" statement.
The check was done for PORTNAME (win) and not HOSTNAME (i.86-pc-cygwin*).
The patch for described modifications is included.
All these modifications can be applied to the "current" tree too.
The compilation was done on CygwinB20.1 with gcc 2.95 and cygipc library 1.05.
The binaries were also able to run on the newest development snapshot
(2000-03-25).
Dan