silently ignored, allowing one to write bizarre things like
DECLARE x setof int;
in plpgsql. This has misled at least one novice into thinking that
plpgsql variables could be sets ...
had to do in DECLARE CURSOR. AFAICS these are all the places affected.
PREPARE case per example from Michael Fuhr, EXPLAIN case located by
grepping for planner calls ...
(rd_att) field of a nailed-in-cache relcache entry. This fixes the bug
reported by Alvaro 8-Dec-2004; I believe it probably also explains
Grant Finnemore's report of 10-Sep-2004.
In an unrelated change in the same file, put back 7.4's response to
failure to rename() the relcache init file, ie, unlink the useless
temp file. I did not put back the warning message, since there might
actually be some reason not to have that.
of an inheritance child table is binary-compatible with the rowtype of
its parent, invent an expression node type that does the conversion
correctly. Fixes the new bug exhibited by Kris Shannon as well as a
lot of old bugs that would only show up when using multiple inheritance
or after altering the parent table.
is null-terminated. I think this is not a real bug because the parser
would always have truncated the identifier to NAMEDATALEN-1 already,
but let's be safe. Per report from Klocwork.
> throughout to the spellings suggested by your book.
Great.
A follow-up patch for current CVS HEAD is attached, and available at
http://troels.arvin.dk/db/pgsql/conformance/pgsql-sql-conformance-followup.patch
The patch
- includes a core feature ID that had been left
out by mistake (C011)
- updates the sql_feature_packages.txt table to
reflect changes in SQL:2003 which were not
covered properly in my last patch
Troels Arvin
> seconds to 10 seconds. The original number was plucked from thin air
> some months ago, and I'd like to review that now based upon further
> thought, observation and experience.
>
> This change has little or no effect on performance, since the interval
> is there mainly to avoid repeated respawn attempts if archiver fails at
> startup. Archiver start-up time is very quick, so there is little danger
> of exceeding 10 seconds.
>
> On a busy system, if the archiver does die, then many files can build up
> in the 60 seconds before respawning. That xlog file backlog could take
> some time to clear. This then leaves a larger than normal window of data
> loss for a possibly long period.
>
> It's a minor change only, with no other effect on function.
Simon Riggs
the "ps" argument list on Unix - meaning that there is no way to
identify for example the stats processors or the bgwriter.
This patch adds this functionality, in a bit of a crufty way. It creates
a kernel Event object with the name of what would be in the title. This
can be viewed using for example Process Explorer.
It's been very handy for me during both debugging and normal use. I haven't
figured out a better way, but perhaps someone has one that's less crufty? If
not, here is at least a working patch :-)
Magnus Hagander
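For illustration only, a minimal sketch of the named-Event trick on Win32
(the helper name and surrounding code are hypothetical, not the submitted
patch; only the CreateEventA() call is the real Win32 API):

#include <windows.h>

/* Hypothetical sketch: publish "title" as the name of a kernel Event
 * object so tools such as Process Explorer can tell the postmaster's
 * subprocesses apart.  The Event is never signaled or waited on; only
 * its name matters, so the handle is simply kept open for the life of
 * the process. */
static HANDLE ident_event = NULL;

static void
win32_advertise_process_title(const char *title)
{
    /* manual-reset, initially non-signaled, named after the would-be title */
    ident_event = CreateEventA(NULL, TRUE, FALSE, title);
}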
reasons I outlined in pghackers a few days ago.
Also, undo someone's overly optimistic decision to reduce tuple state
checks from if (...) elog() to Asserts. If I trusted this code more,
I might think it was a good idea to disable these checks in production
installations. But I don't.
escapes --- they aren't simply quoted characters. Problem noted by
Antti Salmela. Also fix problem with incorrect handling of multibyte
characters when followed by a quantifier.
In particular, there was a mathematical tie between the two possible
nestloop-with-materialized-inner-scan plans for a join (ie, we computed
the same cost with either input on the inside), resulting in a roundoff
error driven choice, if the relations were both small enough to fit in
sort_mem. Add a small cost factor to ensure we prefer materializing the
smaller input. This changes several regression test plans, but with any
luck we will now have more stability across platforms.
a relation's number of blocks, rather than the possibly-obsolete value
in pg_class.relpages. Scale the value in pg_class.reltuples correspondingly
to arrive at a hopefully more accurate number of rows. When pg_class
contains 0/0, estimate a tuple width from the column datatypes and divide
that into current file size to estimate number of rows. This improved
methodology allows us to jettison the ancient hacks that put bogus default
values into pg_class when a table is first created. Also, per a suggestion
from Simon, make VACUUM (but not VACUUM FULL or ANALYZE) adjust the value
it puts into pg_class.reltuples to try to represent the mean tuple density
instead of the minimal density that actually prevails just after VACUUM.
These changes alter the plans selected for certain regression tests, so
update the expected files accordingly. (I removed join_1.out because
it's not clear if it still applies; we can add back any variant versions
as they are shown to be needed.)
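For illustration, a rough sketch of the estimation arithmetic described
above (the function name, structure, and per-page overhead guess are mine,
not the committed planner code):

/* Illustrative only: estimate the current row count for a relation. */
static double
estimate_row_count(double cur_pages,      /* current file size, in pages */
                   double relpages,       /* pg_class.relpages */
                   double reltuples,      /* pg_class.reltuples */
                   double est_tuple_width /* bytes, from column datatypes */)
{
    double density;     /* tuples per page */

    if (relpages > 0)
        density = reltuples / relpages;            /* scale stored estimate */
    else
        density = (8192.0 - 200.0) / est_tuple_width;  /* rough guess: usable
                                                        * bytes per 8K page
                                                        * over row width */
    return density * cur_pages;
}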
prevents problems when the DECLARE is in a portal and is executed
repeatedly, as is possible in v3 protocol. Per analysis by Oliver
Jowett, though I didn't use his patch exactly.
error conditions during regexp compile, but not during regexp execution;
any sort of "can't happen" errors would be treated as no-match instead
of being reported as they should be. Noticed while trying to duplicate
a reported Tcl bug.
to be processed by GUC before InitPostgres, because any required lookup
of the encoding conversion function has to be done during InitializeClientEncoding.
So, I broke this last week by moving GUC processing to after InitPostgres :-(.
What we can do as a compromise is process non-SUSET variables during
command line scanning (the same as before), and postpone the processing
of only SUSET variables. None of the SUSET variables need to be set
before InitPostgres.
fill factor has been exceeded. We usually run with ffactor == 1, but
the way the test was coded, it wouldn't split a bucket until the actual
fill factor reached 2.0, because of use of integer division. Change
from > to >= so that it will split more aggressively when the table
starts to get full.
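As a self-contained illustration of the integer-division effect (made-up
numbers, not the hashutil code):

#include <stdio.h>

int
main(void)
{
    int ntuples  = 150;   /* tuples currently in the hash table */
    int nbuckets = 100;   /* number of buckets */
    int ffactor  = 1;     /* target tuples per bucket */

    /* Old test: 150/100 truncates to 1, and 1 > 1 is false, so the
     * bucket would not split until ntuples reached 200 (density 2.0). */
    printf("old test splits: %d\n", ntuples / nbuckets > ffactor);

    /* New test: 1 >= 1 is true, so the split happens as soon as the
     * average occupancy reaches the target fill factor. */
    printf("new test splits: %d\n", ntuples / nbuckets >= ffactor);
    return 0;
}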
few cycles during transaction exit. A typical session probably
wouldn't have as many as half a dozen portals open at once, so the
original value of 64 seems far larger than needed.
no need for it to be nearly as big as the global hash table, and since
it's not in shared memory it can grow if it does need to be bigger.
By reducing the size, we speed up hash_seq_search(), which saves a
significant fraction of subtransaction entry/exit overhead.
to the original List; per report from Sebastian Böck. I think this is
the last such bug --- I examined every lcons() call in the backend and
the rest seem OK --- but it's nervous-making that we're still finding
'em so many months after the List rewrite went in.
collector until the transaction commits. Per recent discussion, this
should avoid confusing autovacuum when an updating transaction runs for
a long time.
this is to avoid scenarios where incoming backends find no live copies
of a database's row because the only live copy is in an as-yet-unwritten
shared buffer, which they can't see. Also, use FlushRelationBuffers()
for forcing out pg_database, instead of the much more expensive BufferSync().
There's no need to write out pages belonging to other relations.
Rather than using ReadBuffer() to increment the reference count on an
already-pinned buffer, we should use IncrBufferRefCount() as it is
faster and does not require acquiring the BufMgrLock.
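A hand-waved sketch of the substitution, assuming the caller already holds
a pin on the buffer (the wrapper function is illustrative; only ReadBuffer()
and IncrBufferRefCount() are real bufmgr calls):

#include "postgres.h"
#include "storage/bufmgr.h"

/* Illustrative fragment, not actual backend code: take an extra pin on a
 * page the caller already holds pinned as 'buf'.  The old pattern was
 * "ReadBuffer(rel, blkno)" on the same page, which repeats the buffer
 * lookup and has to acquire the BufMgrLock just to find it again. */
static Buffer
pin_again(Buffer buf)
{
    IncrBufferRefCount(buf);    /* cheap: only bumps the local pin count */
    return buf;
}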
even uglier than it was already :-(. Also, on Windows only, use temporary
shared memory segments instead of ordinary files to pass over critical
variable values from postmaster to child processes. Magnus Hagander
more than 65K columns, or when the created table has more than 65K columns
due to adding inherited columns from parent relations. Fix a similar
crash when processing SELECT queries with more than 65K target list
entries. In all three cases we would eventually detect the error and
elog, but the check was being made too late.
We don't really want to start a new SPI connection, just keep using the old
one; otherwise we have memory management problems as illustrated by
John Kennedy's bug report of today. This requires a bit of a hack to
ensure the SPI stack state is properly restored, but then again what we
were doing before was a hack too, strictly speaking. Add a regression
test to cover this case.
plain SUSET instead. Also delay processing of options received in
client connection request until after we know if the user is a superuser,
so that SUSET values can be set that way by legitimate superusers.
Per recent discussion.
buffer is valid, as ReadBuffer() will elog on error. Most of the call
sites of ReadBuffer() got this right, but this patch fixes those call
sites that did not.
> pg specific, like "PostgreSQL.1". I have not done this since a new compile
> would not detect a running old beta. But now would be the time (or never).
Zeugswetter Andreas
shared memory segment ID. If we can't access the existing shmem segment,
it must not be relevant to our data directory. If we can access it,
then attach to it and check for an actual match to the data directory.
This should avoid some cases of failure-to-restart-after-boot without
introducing any significant risk of failing to detect a still-running
old backend.
estimates when combining the estimates for a range query. As pointed out
by Miquel van Smoorenburg, the existing check for an impossible combined
result would quite possibly fail to detect one default and one non-default
input. It seems better to use the default range query estimate in such
cases. To do so, add a check for an estimate of exactly DEFAULT_INEQ_SEL.
This is a bit ugly because it introduces additional coupling between
clauselist_selectivity and scalarltsel/scalargtsel, but it's not like
there wasn't plenty already...
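Roughly, the extra check amounts to something like this simplified sketch
(not the committed diff; constant values copied from selfuncs.h):

/* Simplified sketch of combining the "<" and ">" selectivities for a
 * range query in clauselist_selectivity(). */
static double
combine_range_estimates(double lobound, double hibound)
{
    const double DEFAULT_INEQ_SEL       = 0.3333333333333333;
    const double DEFAULT_RANGE_INEQ_SEL = 0.005;

    if (lobound == DEFAULT_INEQ_SEL || hibound == DEFAULT_INEQ_SEL)
    {
        /* At least one side is only the default guess, so
         * hibound + lobound - 1.0 is meaningless; fall back to the
         * default range-query estimate instead. */
        return DEFAULT_RANGE_INEQ_SEL;
    }
    return hibound + lobound - 1.0;     /* may still need clamping to >= 0 */
}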
working as intended --- for some reason, FROM a.b.c was getting
parsed as if it were a function name and not a qualified name.
I think there must be a bug in bison, because it should have
complained that the grammar was ambiguous. Anyway, fix it along
the same lines previously used for func_name vs columnref, and get
rid of the right-recursion in attrs that seems to have confused
bison.
type-and-length coercion function, make sure that the coercion function
is told the correct typmod. Fixes Kris Jurka's example of a domain
over bit(N).
clause implicitly whenever one is not given explicitly. Remove concept
of a schema having an associated tablespace, and simplify the rules for
selecting a default tablespace for a table or index. It's now just
(a) explicit TABLESPACE clause; (b) default_tablespace if that's not an
empty string; (c) database's default. This will allow pg_dump to use
SET commands instead of tablespace clauses to determine object locations
(but I didn't actually make it do so). All per recent discussions.
to DAY precision or coarser; leave the timezone alone when precision is
HOUR or less. This avoids surprises for inputs near a DST transition
time, as per example from Matthew Gabeler-Lee. (The only reason we
recalculate at all is so that outputs that are supposed to represent
days will come out as local midnight, and that's not relevant for sub-day
precision.)
use it, as per my proposal of yesterday. This gives us a means of
determining the zone offset to impute to an unlabeled timestamp that
is both efficient and reliable, unlike all our previous tries involving
mktime() and localtime(). The behavior for invalid or ambiguous times
at a DST transition is fixed to be really and truly "assume standard
time", fixing a bug that has come and gone repeatedly but was back
again in 7.4. (There is some ongoing discussion about whether we should
raise an error instead, but for the moment I'll make it do what it was
previously intended to do.)
of HeapTupleSatisfiesItself() to trigger a hint-bit update on the tuple:
if the row was updated or deleted by a subtransaction of my own transaction
that was later rolled back. This cannot occur in pre-8.0 of course, so
the hint-bit patch applied a couple weeks ago is OK for existing releases.
But for 8.0 it seems we had better fix things so that RI_FKey_check can
pass the correct buffer number to HeapTupleSatisfiesItself. Accordingly,
add fields to the TriggerData struct to carry the buffer ID(s) for the
old and new tuple(s). There are other possible solutions but this one
seems cleanest; it will allow other AFTER-trigger functions to safely
do tqual.c calls if they want to. Put new fields at end of struct so
that there is no API breakage.
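The shape of the struct change, as a sketch (the exact field names shown
here are from memory and may differ from the committed header):

/* commands/trigger.h -- sketch of TriggerData after the change; the new
 * Buffer fields go at the end so existing trigger code keeps working. */
typedef struct TriggerData
{
    NodeTag      type;
    TriggerEvent tg_event;
    Relation     tg_relation;
    HeapTuple    tg_trigtuple;
    HeapTuple    tg_newtuple;
    Trigger     *tg_trigger;
    Buffer       tg_trigtuplebuf;   /* buffer holding tg_trigtuple, or
                                     * InvalidBuffer */
    Buffer       tg_newtuplebuf;    /* buffer holding tg_newtuple, or
                                     * InvalidBuffer */
} TriggerData;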
We can't regurgitate the unconverted string as I first thought, because
the elog.c mechanisms will assume the error message data is in the server
encoding and attempt a reverse conversion. Eventually it might be worth
providing a short-circuit path to support this, but for now the simplest
solution is to abandon trying to report back the line contents after a
conversion failure. Per bug report from Sil Lee, 27-Oct-2004.
'recycled log files' and 'removed log files' messages from DEBUG1 to
DEBUG2, replacing them with a count of files added/removed/recycled in
the checkpoint end message, as per suggestion from Simon Riggs.
files and directories. This ensures that the bgwriter will close any open
file references it is holding for files therein, which is needed for the
rmdir() to succeed. Andrew Dunstan and Tom Lane.
try to display it as a reference to the underlying column instead. This
is a legitimate substitution (it wouldn't be for a named join) and it
fixes some cases where the display would otherwise be ambiguous. Per
example from Sim Zacks.
in all cases when keep_buf = true. This allows ANALYZE's inner loop to
use heap_release_fetch, which saves multiple buffer lookups for the same
page and avoids overestimation of cost by the vacuum cost mechanism.
returning a NULL pointer (some callers remembered to check the return
value, but some did not -- it is safer to just bail out).
Also, clean up pgstat.c to use elog(ERROR) rather than elog(LOG) followed
by exit().
examinable by non-superusers, and use it to protect the recently-added
GUC variables for data directory and config files. For now I have only
flagged those variables that could be used to deduce something about
the server's filesystem layout, but possibly we should also mark vars
related to logging settings and other admin-only information?
at the top level of the column's old default expression before adding
an implicit coercion to the new column type. This seems to satisfy the
principle of least surprise, as per discussion of bug #1290.
NO ACTION check is deferrable. This seems to be a closer approximation
to what the SQL spec says than what we were doing before, and it prevents
some anomalous behaviors that are possible now that triggers can fire
during the execution of PL functions.
Stephan Szabo.
to make life cushy for the JDBC driver. Centralize the decision-making
that affects this by inventing a get_type_func_class() function, rather
than adding special cases in half a dozen places.
an oversize message, per suggestion from Oliver Jowett. I'm a bit
dubious that this is a real problem, since the client likely doesn't
have any more space available than the server, but it's not hard to
make it behave according to the protocol intention.
This does not disable the bgwriter process: it still has to wake up often
enough to collect fsync requests from backends in a timely fashion. But
it responds to the recent gripe about not being able to prevent the disk
from being spun up constantly.
specifies a new default tablespace and the template database already has
some tables in that tablespace. There isn't any way to solve this fully
without modifying the clone database's pg_class contents, so for now the
best we can do is issue a better error message.
only covered the case of assigning "", and failed to recognize that
actually setlocale(LC_MESSAGES,...) does not work at all on this platform.
Magnus Hagander, some code prettification by Tom Lane.
pins at end of transaction, and reduce AtEOXact_Buffers to an Assert
cross-check that this was done correctly. When not USE_ASSERT_CHECKING,
AtEOXact_Buffers is a complete no-op. This gets rid of an O(NBuffers)
bottleneck during transaction commit/abort, which recent testing has shown
becomes significant above a few tens of thousands of shared buffers.
(if any) currently waited for by LockBufferForCleanup(), which is all
that we were using it for anymore. Saves some space and eliminates
proportional-to-NBuffers slowdown in UnlockBuffers().
http://archives.postgresql.org/pgsql-hackers/2004-10/msg00464.php.
This fix is intended to be permanent: it moves the responsibility for
calling SetBufferCommitInfoNeedsSave() into the tqual.c routines,
eliminating the requirement for callers to test whether t_infomask changed.
Also, tighten validity checking on buffer IDs in bufmgr.c --- several
routines were paranoid about out-of-range shared buffer numbers but not
about out-of-range local ones, which seems a tad pointless.
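A hand-waved sketch of the new division of labor (not the actual tqual.c
code): the visibility routine itself records the hint-bit change, so
callers no longer need to compare t_infomask before and after:

#include "postgres.h"
#include "access/htup.h"
#include "storage/bufmgr.h"

/* Illustrative only: setting a hint bit from inside a tqual.c visibility
 * routine, which now takes the buffer as an argument. */
static void
set_hint_bit(HeapTupleHeader tuple, uint16 infomask_bit, Buffer buffer)
{
    tuple->t_infomask |= infomask_bit;
    /* The tqual routine, not its caller, is now responsible for noting
     * that the buffer contains a changed (hint-bit-only) page. */
    SetBufferCommitInfoNeedsSave(buffer);
}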
C:\msys\1.0\home\y-asaba>pg_ctl -D data restart
waiting for postmaster to shut down...LOG: received smart shutdown
request.
LOG: shutting down
LOG: database system is shut down
done
postmaster stopped
postmaster starting
C:\msys\1.0\home\y-asaba>postmaster.exe: invalid argument: "'-D'"
Try "postmaster.exe --help" for more information.
Yoshiyuki Asaba
- remove another senseless "extern" keyword that was applied to a
function definition
- change a few more function signatures from "some_type foo()" to
"some_type foo(void)"
- rewrite another K&R style function definition
- make the type of the "action" function pointer in the KeyWord struct
in src/backend/utils/adt/formatting.c more precise
for scanning one term of an OR clause if the index's predicate is implied
by that same OR clause term (possibly in conjunction with top-level WHERE
clauses). Per recent example from Dawid Kuroczko,
http://archives.postgresql.org/pgsql-performance/2004-10/msg00095.php
Also, fix a very long-standing bug in index predicate testing, namely the
bizarre ordering of decomposition of predicate and restriction clauses.
AFAICS the correct way is to break down the predicate all the way, and
then for each component term see if you can prove it from the entire
restriction set. The original coding had a purely-implementation-artifact
distinction between ANDing at the top level and ANDing below that, and
proceeded to get the decomposition order wrong everywhere below the top
level, with the result that even slightly complicated AND/OR predicates
could not be proven. For instance, given
create index foop on foo(f2) where f1=42 or f1=1
or (f1 = 11 and f2 = 55);
the old code would fail to match this index to the query
select * from foo where f1 = 11 and f2 = 55;
when it obviously ought to match.
parent table's tablespace, as per gripe from Michael Kleiser. Choose
a more plausible column order for this view and pg_tables. Update
documentation of these views, which was missed in original patch.
- replace some function signatures of the form "some_type foo()" with
"some_type foo(void)"
- replace a few instances of a literal 0 being used as a NULL pointer;
there are more instances of this in the code, but I just fixed a few
- in src/backend/utils/mb/wstrncmp.c, replace K&R style function
declarations with ANSI style, remove use of 'register' keyword
- remove an "extern" modifier that was applied to a function definition
(rather than a declaration)
The vars are renamed to data_directory, config_file, hba_file, and
ident_file, and are guaranteed to be set to accurate absolute paths
during postmaster startup.
This commit does not yet do anything about hiding path values from
non-superusers.
Refactor code into something reasonably understandable, cause
use of the feature to not fail in standalone backends or in
EXEC_BACKEND case, fix sloppy guc.c table entries, make the
documentation minimally usable.
list elements comma-separated instead of barfing. This allows elimination
of half a dozen redundant copies of that behavior, and also makes the
world safe again for pg_get_expr() applied to pg_index.indexprs, per gripe
from Alexander Zhiltsov.
columns. The returned tuple needs to have appropriate NULL columns
inserted so that it actually matches the declared rowtype. It seemed
convenient to use a JunkFilter for this, so I made some cleanups and
simplifications in the JunkFilter code to allow it to support this
additional functionality. (That in turn exposed a latent bug in
nodeAppend.c, which is that it was returning a tuple slot whose
descriptor didn't match its data.) Also, move check_sql_fn_retval
out of pg_proc.c and into functions.c, where it seems to more naturally
belong.