> a_ogawa <a_ogawa@hi-ho.ne.jp> writes:
> > It is a reasonable idea. However, most of the MemSet overhead could
> > not be avoided by this idea, because the per-tuple contexts are used
> > at an early stage of the executor.
>
> Drat. Well, what about changing that? We could introduce additional
> contexts or change the startup behavior so that the ones that are
> frequently reset don't have any data in them unless you are working
> with pass-by-ref values inside the inner loop.
That might be possible. However, I think that we should confine this
change to aset.c.
I thought about it further: we can check whether the context has been
used since the last reset even when the blocks list is not empty. Please
see the attached patch.
The measured effect of the patch is as follows:
o Execution time for running the SQL ten times.
(1)Linux(CPU: Pentium III, Compiler option: -O2)
- original: 24.960s
- patched : 23.114s
(2)Linux(CPU: Pentium 4, Compiler option: -O2)
- original: 8.730s
- patched : 7.962s
(3)Solaris(CPU: Ultra SPARC III, Compiler option: -O2)
- original: 37.0s
- patched : 33.7s
Atsushi Ogawa (a_ogawa)
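In outline, the proposed check is something like this (a standalone
sketch with made-up names, not the actual aset.c code): remember whether
anything has been allocated since the last reset, and skip the freelist
MemSet when nothing has.

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    #define NFREELISTS 11

    typedef struct Ctx
    {
        void *freelist[NFREELISTS]; /* chains of recycled chunks */
        bool  isReset;              /* no allocations since last reset? */
    } Ctx;

    static void *
    ctx_alloc(Ctx *c, size_t size)
    {
        c->isReset = false;         /* context is dirty now */
        return malloc(size);        /* real code carves chunks from blocks */
    }

    static void
    ctx_reset(Ctx *c)
    {
        if (c->isReset)
            return;                 /* unused since last reset: skip MemSet */
        memset(c->freelist, 0, sizeof(c->freelist));
        c->isReset = true;
    }
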
performance problem pointed out by phil@vodafone: to wit, we were
spending O(N^2) time to check dropped-ness in an N-deep join tree,
even in the case where the tree was freshly constructed and couldn't
possibly mention any dropped columns. Instead of recursing in
get_rte_attribute_is_dropped(), change the data structure definition:
the joinaliasvars list of a JOIN RTE must have a NULL Const instead
of a Var at any position that references a now-dropped column. This
costs nothing during normal parse-rewrite-plan path, and instead we
have a linear-time update to make when loading a stored rule that
might contain now-dropped columns. While at it, move the responsibility
for acquiring locks on relations referenced by rules into this separate
function (which I therefore chose to call AcquireRewriteLocks).
This saves effort --- namely, duplicated lock grabs in parser and rewriter
--- in the normal path at a cost of one extra non-locked heap_open()
in the stored-rule path; seems a good tradeoff. A fringe benefit is
that it is now *much* clearer that we acquire lock on relations referenced
in rules before we make any rewriter decisions based on their properties.
(I don't know of any bug of that ilk, but it wasn't exactly clear before.)
Instead of a separate CRC on each backup block, include backup blocks
in their parent WAL record's CRC; this is important to ensure that the
backup block really goes with the WAL record, ie there was not a page
tear right at the start of the backup block. Implement a simple form
of compression of backup blocks: drop any run of zeroes starting at
pd_lower, so as not to store the unused 'hole' that commonly exists in
PG heap and index pages. Tweak PageRepairFragmentation and related
routines to ensure they keep the unused space zeroed, so that the above
compression method remains effective. All per recent discussions.
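The hole-skipping idea in miniature (a standalone sketch; BLCKSZ and the
two offsets stand in for the real page layout): store the bytes before
pd_lower plus the bytes from pd_upper on, and restore the gap as zeroes,
which is why the unused space must be kept zeroed.

    #include <stdint.h>
    #include <string.h>

    #define BLCKSZ 8192

    /* Write the compressed image into dst; returns the bytes used. */
    static size_t
    compress_hole(const char *page, uint16_t pd_lower, uint16_t pd_upper,
                  char *dst)
    {
        size_t tail = BLCKSZ - pd_upper;

        memcpy(dst, page, pd_lower);                    /* header + line pointers */
        memcpy(dst + pd_lower, page + pd_upper, tail);  /* tuple data */
        return (size_t) pd_lower + tail;                /* the hole is not stored */
    }

    /* Reconstruct the full page; the hole comes back as zeroes. */
    static void
    restore_hole(const char *img, uint16_t pd_lower, uint16_t pd_upper,
                 char *page)
    {
        memcpy(page, img, pd_lower);
        memset(page + pd_lower, 0, pd_upper - pd_lower);
        memcpy(page + pd_upper, img + pd_lower, BLCKSZ - pd_upper);
    }
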
and RelationNameGetTupleDesc() as deprecated; remove uses of the
latter in the contrib library. Along the way, clean up crosstab()
code and documentation a little.
key, compare the new and old row versions. If the foreign key column has
not changed, we needn't enqueue the trigger, since the update cannot
violate the foreign key. This optimization was previously applied in the
RI trigger function, but it is more efficient to avoid firing the trigger
altogether. Per recent discussion on pgsql-hackers.
Also add a regression test for some unintuitive foreign key behavior, and
refactor some code that deals with the OIDs of the various RI trigger
functions.
status of the most recently queried userid. Since the common pattern is
many successive queries about the same user (ie, the current user) this
can save a lot of syscache probes.
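Schematically, the memoization amounts to this (illustrative names,
stand-in lookup; the real code must also drop the cache when pg_shadow
changes):

    #include <stdbool.h>

    typedef unsigned int Oid;
    #define InvalidOid ((Oid) 0)

    /* stand-in for the expensive syscache probe */
    static bool
    lookup_usersuper(Oid userid)
    {
        return userid == 10;
    }

    static Oid  last_userid = InvalidOid;
    static bool last_usersuper = false;

    static bool
    is_superuser(Oid userid)
    {
        if (userid != last_userid)      /* cache miss: probe once */
        {
            last_usersuper = lookup_usersuper(userid);
            last_userid = userid;
        }
        return last_usersuper;          /* repeat queries hit the cache */
    }
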
spotted by Qingqing Zhou. The HASH_ENTER action now automatically
fails with elog(ERROR) on out-of-memory --- which incidentally lets
us eliminate duplicate error checks in quite a bunch of places. If
you really need the old return-NULL-on-out-of-memory behavior, you
can ask for HASH_ENTER_NULL. But there is now an Assert in that path
checking that you aren't hoping to get that behavior in a palloc-based
hash table.
Along the way, remove the old HASH_FIND_SAVE/HASH_REMOVE_SAVED actions,
which were not being used anywhere anymore, and were surely too ugly
and unsafe to want to see revived again.
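A toy model of the two entry modes (HASH_ENTER and HASH_ENTER_NULL are
the real action names; the table itself is reduced to a malloc
stand-in):

    #include <stdio.h>
    #include <stdlib.h>

    typedef enum
    {
        HASH_FIND, HASH_ENTER, HASH_ENTER_NULL, HASH_REMOVE
    } HASHACTION;

    static void *
    hash_enter(size_t entrysize, HASHACTION action)
    {
        void *entry = malloc(entrysize);    /* stand-in for entry creation */

        if (entry == NULL)
        {
            if (action == HASH_ENTER_NULL)
                return NULL;    /* caller opted in to checking for NULL */
            fprintf(stderr, "out of memory\n");
            abort();            /* stand-in for elog(ERROR) */
        }
        return entry;           /* HASH_ENTER callers need no NULL check */
    }
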
from a RECORD Const node, because that's what it may be faced with
after constant-folding of a function returning RECORD. Per example
from Michael Fuhr.
routines in the index's relcache entry, instead of doing a fresh fmgr_info
on every index access. We were already doing this for the index's opclass
support functions; not sure why we didn't think to do it for the AM
functions too. This supersedes the former method of caching (only)
amgettuple in indexscan scan descriptors; it's an improvement because the
function lookup can be amortized across multiple statements instead of
being repeated for each statement. Even though lookup for builtin
functions is pretty cheap, this seems to drop a percent or two off some
simple benchmarks.
Display only 9 not 10 digits of precision for timestamp values when
using non-integer timestamps. This prevents the display of rounding
errors for common values like days < 32.
working buffer into ParseDateTime() and reject too-long input there,
rather than checking the length of the input string before calling
ParseDateTime(). The old method was bogus because ParseDateTime() can use
a variable amount of working space, depending on the content of the
input string (e.g. how many fields need to be NUL terminated). This fixes
a minor stack overrun -- I don't _think_ it's exploitable, although I
won't claim to be an expert.
Along the way, fix a bug reported by Mark Dilger: the working buffer
allocated by interval_in() was too short, which resulted in rejecting
some perfectly valid interval input values. I added a regression test for
this fix.
using pg_mblen. Therefore, pg_mblen is executed many times, and it
becomes a bottleneck.
This patch adds a shortcut that reduces the number of pg_mblen calls by
comparing the first byte first.
a_ogawa
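The shortcut in miniature (mblen() is a stand-in for the encoding-aware
pg_mblen): when testing two multibyte characters for equality, a cheap
first-byte comparison avoids the length lookups in the common mismatch
case.

    #include <stdbool.h>
    #include <string.h>

    /* stand-in: bytes with the high bit set start a 2-byte character */
    static int
    mblen(const char *s)
    {
        return (*s & 0x80) ? 2 : 1;
    }

    static bool
    mbchar_eq(const char *p1, const char *p2)
    {
        int len;

        if (*p1 != *p2)
            return false;       /* first bytes differ: no mblen needed */
        len = mblen(p1);
        if (len != mblen(p2))
            return false;
        return memcmp(p1, p2, len) == 0;
    }
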
from Abhijit Menon-Sen, minor editorialization from Neil Conway. Also,
improve md5(text) to allocate a constant-sized buffer on the stack
rather than via palloc.
Catalog version bumped.
communication structure, and make it its own module with its own lock.
This should reduce contention at least a little, and it definitely makes
the code seem cleaner. Per my recent proposal.
when the blocks list is empty (there can surely be no freelist items if
the context contains no memory), and use MemSetAligned not MemSet to
clear the headers (we assume alignof(pointer) >= alignof(int32)).
Per discussion with Atsushi Ogawa. He proposes some further hacking
that I'm not yet sold on, but these two changes are unconditional wins
since there is no case in which they make things slower.
AtCommit_Portals is restarted when a portal is deleted. This is
necessary since the deletion of a portal may cause the deletion of
another, which on rare occasions may cause the iterator to return a
deleted portal and thus trigger a renewed attempt to delete it.
Thomas Hallgren
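The restart pattern, modeled on a plain linked list (all names
illustrative):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Portal
    {
        struct Portal *next;
        bool           doomed;
    } Portal;

    /* Stand-in for portal deletion. The real routine may cascade and
     * delete other portals too, which is why the caller's iterator can
     * go stale. */
    static void
    delete_portal(Portal **head, Portal *p)
    {
        for (Portal **lnk = head; *lnk != NULL; lnk = &(*lnk)->next)
        {
            if (*lnk == p)
            {
                *lnk = p->next;
                return;
            }
        }
    }

    static void
    at_commit_portals(Portal **head)
    {
    restart:
        for (Portal *p = *head; p != NULL; p = p->next)
        {
            if (p->doomed)
            {
                delete_portal(head, p); /* may invalidate our position */
                goto restart;           /* so rescan from the top */
            }
        }
    }
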
collector messages, per recent discussion on pgsql-patches. This
actually required quite a few changes -- for example,
"databaseid != InvalidOid" was used to check whether a slot in the
backend entry table was initialized, but that no longer works since
the slot might be initialized prior to receiving the BESTART message
which contains the database id. We now use procpid > 0 to indicate
that a slot is non-empty.
Other changes:
- various comment improvements and cleanups
- there's no need to zero-out the entire activity buffer in
pgstat_add_backend(), we can just set activity[0] to '\0'.
- remove the counting of the # of connections to a database; this
was not used anywhere
One change in behavior I wasn't sure about: previously, the code
would create a hash table entry for a database as soon as any message
was received whose header referenced that database. Now, we only
create hash table entries as needed (so for example BESTART won't
create a database hash table entry, since it doesn't need to
access anything in the per-db hash table). It would be easy enough
to retain the old behavior, but AFAICS it is not required.
memset() or MemSet() to a char *. For one thing, memset()'s first
argument is a void *, and furthermore a void * can be implicitly coerced
to/from any other pointer type.
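The point in miniature:

    #include <string.h>

    struct Widget
    {
        int    x;
        double y;
    };

    static void
    zero_widget(struct Widget *w)
    {
        memset((char *) w, 0, sizeof(*w));  /* before: the cast buys nothing */
        memset(w, 0, sizeof(*w));           /* after: identical behavior */
    }
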
* Add session start time to pg_stat_activity
* Add the client IP address and port to pg_stat_activity
Original patch from Magnus Hagander, code review by Neil Conway. Catalog
version bumped. This patch sends the client IP address and port number in
every statistics message; that's not ideal, but will be fixed up shortly.
before we check commit/abort status. Formerly this was done in some paths
but not all, with the result that a transaction might be considered
committed for some purposes before it became committed for others.
Per example found by Jan Wieck.
which is neither needed by nor related to that header. Remove the bogus
inclusion and instead include the header in those C files that actually
need it. Also fix unnecessary inclusions and bad inclusion order in
tsearch2 files.
associated with a hashtable is allocated in that hashtable's private
context, so that hash_destroy only has to destroy the context and not
do any retail pfree's; and tighten the inner loop of hash_seq_search.
output area as INTERNAL not CSTRING. This is to prevent people from
calling the functions by hand. This is a permanent solution for the
back branches but I hope it is just a stopgap for HEAD.
only one argument. (Per recent discussion, the option to accept multiple
arguments is pretty useless for user-defined types, and would be a likely
source of security holes if it was used.) Simplify call sites of
output/send functions to not bother passing more than one argument.
record object itself, rather than relying on a second OID argument to be
correct. This patch just changes the function behavior and not the
catalogs, so it's OK to back-patch to 8.0. Will remove the now-redundant
second argument in pg_proc in a separate patch in HEAD only.
a warning when a variable is used as a format string for printf()
and similar functions (if the variable is derived from untrusted
data, it could include unexpected formatting sequences). This
emits too many warnings to be enabled by default, but it does
flag a few dubious constructs in the Postgres tree. This patch
fixes up the obvious variants: functions that are passed a variable
format string but no additional arguments.
Most of these are harmless (e.g. the ruleutils stuff), but there
is at least one actual bug here: if you create a trigger named
"%sfoo", pg_dump will read uninitialized memory and fail to dump
the trigger correctly.
Essentially, we shoehorn in a lockable-object-type field by taking
a byte away from the lockmethodid, which can surely fit in one byte
instead of two. This allows less artificial definitions of all the
other fields of LOCKTAG; we can get rid of the special pg_xactlock
pseudo-relation, and also support locks on individual tuples and
general database objects (including shared objects). None of those
possibilities are actually exploited just yet, however.
I removed pg_xactlock from pg_class, but did not force initdb for
that change. At this point, relkind 's' (SPECIAL) is unused and
could be removed entirely.
to eliminate unnecessary deadlocks. This commit adds SELECT ... FOR SHARE
paralleling SELECT ... FOR UPDATE. The implementation uses a new SLRU
data structure (managed much like pg_subtrans) to represent multiple-
transaction-ID sets. When more than one transaction is holding a shared
lock on a particular row, we create a MultiXactId representing that set
of transactions and store its ID in the row's XMAX. This scheme allows
an effectively unlimited number of row locks, just as we did before,
while not costing any extra overhead except when a shared lock actually
has to be shared. Still TODO: use the regular lock manager to control
the grant order when multiple backends are waiting for a row lock.
Alvaro Herrera and Tom Lane.
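Very roughly, the mapping works like this (toy arrays instead of the
real SLRU machinery; all names illustrative):

    #include <stdint.h>

    typedef uint32_t TransactionId;
    typedef uint32_t MultiXactId;

    #define MAX_MULTIS  1024
    #define MAX_MEMBERS 64

    static TransactionId members[MAX_MULTIS][MAX_MEMBERS];
    static int           nmembers[MAX_MULTIS];
    static MultiXactId   next_multi = 1;

    /* Record the set of lockers and hand back an ID to store in the
     * row's XMAX; a reader can later expand the ID back into the member
     * xids to see who holds the shared lock. */
    static MultiXactId
    multixact_create(const TransactionId *xids, int n)
    {
        MultiXactId id = next_multi++;

        for (int i = 0; i < n; i++)
            members[id % MAX_MULTIS][i] = xids[i];
        nmembers[id % MAX_MULTIS] = n;
        return id;
    }
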
node, as this behavior is now better done as a bitmap OR indexscan.
This allows considerable simplification in nodeIndexscan.c itself as
well as several planner modules concerned with indexscan plan generation.
Also we can improve the sharing of code between regular and bitmap
indexscans, since they are now working with nigh-identical Plan nodes.
of timetz values misbehaved in --enable-integer-datetime cases, and
EXTRACT(EPOCH) subtracted the zone instead of adding it in all cases.
Backpatch to all supported releases (except --enable-integer-datetime code
does not exist in 7.2).
logic operations during planning. Seems cleaner to create two new Path
node types, instead --- this avoids duplication of cost-estimation code.
Also, create an enable_bitmapscan GUC parameter to control use of bitmap
plans.
* Changes the APIs to the timezone functions to take a pg_tz pointer as
an argument, representing the timezone to use for the selected
operation.
* Adds a global_timezone variable that represents the current timezone
in the backend as set by SET TIMEZONE (or guc, or env, etc).
* Implements a hash-table cache of loaded tables, so we don't have to
read and parse the TZ file every time we change a timezone. While not
necessary now (we don't change timezones very often), I believe this
will be necessary (or at least good) when "multiple timezones in the
same query" is eventually implemented. And code-wise, this was the time
to do it.
There are no user-visible changes at this time. Implementing the
"multiple zones in one query" is a later step...
This also gets rid of some of the cruft needed to "back out a timezone
change", since we previously couldn't check a timezone unless it was
activated first.
Passes regression tests on win32, linux (slackware 10) and solaris x86.
Magnus Hagander
whose keys are OIDs. The only one that looks particularly performance
critical is the relcache hashtable, but as long as we've got the function
we may as well use it wherever it's applicable.
indexes. Replace all heap_openr and index_openr calls by heap_open
and index_open. Remove runtime lookups of catalog OID numbers in
various places. Remove relcache's support for looking up system
catalogs by name. Bulky but mostly very boring patch ...
indexes. Extend the macros in include/catalog/*.h to carry the info
about hand-assigned OIDs, and adjust the genbki script and bootstrap
code to make the relations actually get those OIDs. Remove the small
number of RelOid_pg_foo macros that we had in favor of a complete
set named like the catname.h and indexing.h macros. Next phase will
get rid of internal use of names for looking up catalogs and indexes;
but this completes the changes forcing an initdb, so it looks like a
good place to commit.
Along the way, I made the shared relations (pg_database etc) not be
'bootstrap' relations any more, so as to reduce the number of hardwired
entries and simplify changing those relations in future. I'm not
sure whether they ever really needed to be handled as bootstrap
relations, but it seems to work fine to not do so now.
avoid encroaching on the 'user' range of OIDs by allowing automatic
OID assignment to use values below 16k until we reach normal operation.
initdb not forced since this doesn't make any incompatible change;
however a lot of stuff will have different OIDs after your next initdb.
be supported for all datatypes. Add CREATE AGGREGATE and pg_dump support
too. Add specialized min/max aggregates for bpchar, instead of depending
on text's min/max, because otherwise the possible use of bpchar indexes
cannot be recognized.
initdb forced because of catalog changes.
into indexscans on matching indexes. For the moment, it only handles
int4 and text datatypes; next step is to add a column to pg_aggregate
so that all MIN/MAX aggregates can be handled. Per my recent proposal.
deferred triggers: either one can create more work for the other,
so we have to loop till it's all gone. Per example from andrew@supernews.
Add a regression test to help spot trouble in this area in future.
the long-term plan for this behavior for quite some time, but it is only
possible now that DELETE has a USING clause so that the user can join
other tables in a DELETE statement without relying on this behavior.
in UPDATE. We also now issue a NOTICE if a query has _any_ implicit
range table entries -- in the past, we would only warn about implicit
RTEs in SELECTs with at least one explicit RTE.
As a result of the warning change, 25 of the regression tests had to
be updated. I also took the opportunity to remove some bogus whitespace
differences between some of the float4 and float8 variants. I believe
I have correctly updated all the platform-specific variants, but let
me know if that's not the case.
Original patch for DELETE ... USING from Euler Taveira de Oliveira,
reworked by Neil Conway.
functions. This patch optimizes int2_sum(), int4_sum(), float4_accum()
and float8_accum() to avoid needing to copy the transition function's
state for each input tuple of the aggregate. In an extreme case
(e.g. SELECT sum(int2_col) FROM table where table has a single column),
it improves performance by about 20%. For more complex queries or tables
with wider rows, the relative performance improvement will not be as
significant.
few palloc's. I also chose to eliminate the restype and restypmod fields
entirely, since they are redundant with information stored in the node's
contained expression; re-examining the expression at need seems simpler
and more reliable than trying to keep restype/restypmod up to date.
initdb forced due to change in contents of stored rules.
performance hack Tom introduced recently. This means we can avoid
copying the transition array for each input tuple if these functions
are invoked as aggregate transition functions.
To test the performance improvement, I created a 1 million row table
with a single int4 column. Without the patch, SELECT avg(col) FROM
table took about 4.2 seconds (after the data was cached); with the
patch, it took about 3.2 seconds. Naturally, the performance
improvement for a less trivial query (or a table with wider rows)
would be relatively smaller.
cases with binary-compatible relabeling. My first try was implicitly
assuming that all operators scalarineqsel is used for have binary-
compatible datatypes on both sides ... which is very wrong of course.
Per report from Michael Fuhr.
change saves a great deal of space in pg_proc and its primary index,
and it eliminates the former requirement that INDEX_MAX_KEYS and
FUNC_MAX_ARGS have the same value. INDEX_MAX_KEYS is still embedded
in the on-disk representation (because it affects index tuple header
size), but FUNC_MAX_ARGS is not. I believe it would now be possible
to increase FUNC_MAX_ARGS at little cost, but haven't experimented yet.
There are still a lot of vestigial references to FUNC_MAX_ARGS, which
I will clean up in a separate pass. However, getting rid of it
altogether would require changing the FunctionCallInfoData struct,
and I'm not sure I want to buy into that.
really ought to run before canonicalize_qual, because it can now produce
forms that canonicalize_qual knows how to improve (eg, NOT clauses).
Also, because eval_const_expressions already knows about flattening
nested ANDs and ORs into N-argument form, the initial flatten_andors
pass in canonicalize_qual is now completely redundant and can be
removed. This doesn't save a whole lot of code, but the time and
palloc traffic eliminated is a useful gain on large expression trees.
access: define new index access method functions 'amgetmulti' that can
fetch multiple TIDs per call. (The functions exist but are totally
untested as yet.) Since I was modifying pg_am anyway, remove the
no-longer-needed 'rel' parameter from amcostestimate functions, and
also remove the vestigial amowner column that was creating useless
work for Alvaro's shared-object-dependencies project.
Initdb forced due to changes in pg_am.
binary-compatible relabeling of one or both operands. examine_variable
should avoid stripping RelabelType from non-variable expressions, so that
they will continue to have the correct type; and convert_to_scalar should
just use that type and ignore the other input type. This isn't perfect
but it beats failing entirely. Per example from Michael Fuhr.
when a zero-month interval is given. Per discussion with Karel.
Also, some desultory const-labeling of constant tables. More could be
done along that line.
when open references remain during normal cleanup of a resource owner.
This restores the system's ability to warn about leaks to what it was
before 8.0. Not really a user-level bug, but helpful for development.
currently does. This is now the default Win32 wal sync method because
we prefer o_datasync to fsync.
Also, change Win32 fsync to a new wal sync method called
fsync_writethrough because that is the behavior of _commit, which is
what is used for fsync on Win32.
Backpatch to 8.0.X.
up-to-speed logic; in particular this will cause it to quote names that
match keywords. Remove unnecessary multibyte cruft from quote_literal
(all backend-internal encodings are 8-bit-safe).
is still alive. This improves our odds of not getting fooled by an
unrelated process when checking a stale lock file. Other checks already
in place, plus one newly added in checkDataDir(), ensure that we cannot
attempt to usurp the place of a postmaster belonging to a different userid,
so there is no need to error out. Add comments indicating the importance
of these other checks.
locale is C.
Backpatch to 8.0.X because some operating systems were throwing errors
for such operations, rather than ignoring the locale when it was C.
can tell whether it is being used as an aggregate or not. This allows
such a function to avoid re-pallocing a pass-by-reference transition
value; normally it would be unsafe for a function to scribble on an input,
but in the aggregate case it's safe to reuse the old transition value.
Make int8inc() do this. This gets a useful improvement in the speed of
COUNT(*), at least on narrow tables (it seems to be swamped by I/O when
the table rows are wide). Per a discussion in early December with
Neil Conway. I also fixed int_aggregate.c to check this, thereby
turning it into something approaching a supportable technique instead
of being a crude hack.
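Schematically, a transition function using this looks as follows
(simplified types and names; the real test inspects the call context
supplied by the executor):

    #include <stdlib.h>

    typedef struct FunctionCallInfoData
    {
        void *context;      /* set by the executor when called as an agg */
        long *transvalue;   /* pass-by-reference transition value */
    } FunctionCallInfoData;

    static long *
    count_step(FunctionCallInfoData *fcinfo)
    {
        if (fcinfo->context != NULL)
        {
            /* called as an aggregate transition: the old value is ours
             * to scribble on, so update it in place */
            *fcinfo->transvalue += 1;
            return fcinfo->transvalue;
        }
        /* plain function call: must not modify the input */
        long *result = malloc(sizeof(long));
        *result = *fcinfo->transvalue + 1;
        return result;
    }
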
elog if the former has trouble writing its file. Code review for
Magnus' patch to redirect stderr to syslog on Windows (Bruce's version
seems right, but did some minor prettification).
Backpatch both changes to 8.0 branch.
Document use of macros for pg_printf functions.
Bump major versions of all interfaces to handle movement of get_progname
from libpq to libpgport in 8.0, and probably other libpgport changes in 8.1.
number of palloc calls. This has a salutary impact on plpgsql operations
with record variables (which create and destroy tupdescs constantly)
and probably helps a bit in some other cases too.
on-the-fly, and thereby avoid blowing out memory when the planner has
underestimated the hash table size. Hash join will now obey the
work_mem limit with some faithfulness. Per my recent proposal
(hash aggregate part isn't done yet though).
the freelist, plus per-buffer spinlocks that protect access to individual
shared buffer headers. This requires abandoning a global freelist (since
the freelist is a global contention point), which shoots down ARC and 2Q
as well as plain LRU management. Adopt a clock sweep algorithm instead.
Preliminary results show substantial improvement in multi-backend situations.
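A standalone sketch of a clock-sweep victim search with per-buffer locks
(field and function names are illustrative, and the sweep hand itself
would need protection too):

    #include <stdbool.h>

    #define NBUFFERS 1024

    typedef struct BufHdr
    {
        int  spinlock;          /* stand-in for a per-buffer spinlock */
        int  refcount;          /* pins held by backends */
        bool recently_used;     /* set on access, cleared by the sweep */
    } BufHdr;

    static BufHdr buffers[NBUFFERS];
    static int    sweep_hand = 0;

    static void spin_lock(int *l)   { *l = 1; }     /* stand-ins */
    static void spin_unlock(int *l) { *l = 0; }

    /* Find a replacement victim; returns it with its spinlock held. */
    static BufHdr *
    clock_sweep(void)
    {
        for (;;)
        {
            BufHdr *buf = &buffers[sweep_hand];

            sweep_hand = (sweep_hand + 1) % NBUFFERS;
            spin_lock(&buf->spinlock);
            if (buf->refcount == 0 && !buf->recently_used)
                return buf;                 /* victim found */
            buf->recently_used = false;     /* give it one more lap */
            spin_unlock(&buf->spinlock);
        }
    }
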
in favor of looking at the flat file copy of pg_database during backend
startup. This should finally eliminate the various corner cases in which
backend startup fails unexpectedly because it isn't able to distinguish
live and dead tuples in pg_database. Simplify locking on pg_database
to be similar to the rules used with pg_shadow and pg_group, and eliminate
FlushRelationBuffers operations that were used only to reduce the odds
of failure of GetRawDatabaseInfo.
initdb forced due to addition of a trigger to pg_database.
implement the md5() SQL-level function). The old code did the
following:
1. de-toast the datum
2. convert it to a cstring via textout()
3. get the length of the cstring via strlen()
Since we are treating the datum context as a blob of binary data,
the latter two steps are unnecessary. Once the data has been
detoasted, we can just use it as-is, and derive its length from
the varlena metadata.
This patch improves some run-of-the-mill md5() computations by
just under 10% in my limited tests, and passes the regression tests.
I also noticed that md5_text() wasn't checking the return value
of md5_hash(); encountering OOM at precisely the right moment
could result in returning a random md5 hash. This patch corrects
that. A better fix would be to make md5_hash() only return on
success (and/or allocate via palloc()), but since it's used in
the frontend as well I don't see an easy way to do that.
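Schematically (the varlena layout mirrors the real 4-byte-header one;
md5_hash is stubbed out so the sketch is self-contained):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct varlena
    {
        int32_t vl_len;         /* total size including this header */
        char    vl_dat[];       /* binary payload, NOT NUL-terminated */
    } varlena;

    /* stand-in stub for the backend md5 routine, which can fail */
    static bool
    md5_hash(const void *buf, size_t len, char *hexsum)
    {
        (void) buf; (void) len;
        strcpy(hexsum, "0123456789abcdef0123456789abcdef");
        return true;
    }

    static bool
    md5_text(const varlena *t, char hexsum[33])
    {
        /* old way: textout() copy, then strlen() -- both unnecessary */
        size_t len = t->vl_len - sizeof(int32_t);

        /* new way: hash the detoasted bytes directly, and check the
         * result rather than risk returning garbage on OOM */
        return md5_hash(t->vl_dat, len, hexsum);
    }
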
during flat-file writing. The only difference is that SnapshotSelf
would consider tuples of the 'current command' within the current
transaction as valid, where SnapshotNow wouldn't. We can eliminate
the need for this with one extra CommandCounterIncrement call before
we start reading the catalogs.
the AMI_OVERRIDE flag. The fact that TransactionLogFetch treats
BootstrapTransactionId as always committed is sufficient to make
bootstrap work, and getting rid of extra tests in heavily used code
paths seems like a win. The files produced by initdb are demonstrably
the same after this change.
file now identifies group members by usesysid not name; this avoids
needing to depend on SearchSysCache which we can't use during startup.
(The old representation was entirely broken anyway, since we did not
regenerate the file following RENAME USER.) It's only a 95% solution
because if the group membership list is big enough to be toasted out
of line, we cannot read it during startup. I think this will do for
the moment, until we have time to implement the planned pg_role
replacement for pg_group.
in GetNewTransactionId(). Since the limit value has to be computed
before we run any real transactions, this requires adding code to database
startup to scan pg_database and determine the oldest datfrozenxid.
This can conveniently be combined with the first stage of an attack on
the problem that the 'flat file' copies of pg_shadow and pg_group are
not properly updated during WAL recovery. The code I've added to
startup resides in a new file src/backend/utils/init/flatfiles.c, and
it is responsible for rewriting the flat files as well as initializing
the XID wraparound limit value. This will eventually allow us to get
rid of GetRawDatabaseInfo too, but we'll need an initdb so we can add
a trigger to pg_database.
estimate to less than the number of values estimated for any one grouping
Var, as suggested by Manfred. This is intuitively right, and what's
more it puts the plan choices in the subselect regression test back the
way they were before ...
clamp the estimated number of groups to table row count over 10, instead
of table row count; this reflects a heuristic that people probably won't
group over a near-unique set of columns, and the knowledge that we don't
currently have any way to estimate the correlation of the columns better
than guessing. This change creates a trivial plan change in one of the
regression tests.
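The heuristic itself is tiny; a sketch:

    static double
    clamp_groups_estimate(double est_groups, double table_rows)
    {
        double cap = table_rows / 10.0;     /* per the heuristic above */

        return (est_groups > cap) ? cap : est_groups;
    }
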
to avoid problems when a cursor depends on objects created or changed in
the same subtransaction. We'd like to do better someday, but this seems
the only workable answer for 8.0.1.
got it wrong when the JOIN was in an outer query level. Per example from
Laurie Burrow. Also fix same issue in markTargetListOrigin. I think the
latter is only a latent bug since we currently don't apply markTargetListOrigin
except at the outer level ... but should do it right anyway.
so that we can get the size of a shared inval message back down to what it
was in 7.4 (and simplify the logic too). Phase 2 of fixing the
'SMgrRelation hashtable corrupted' problem.
is the minimum required fix. I want to look next at taking advantage of
it by simplifying the message semantics in the shared inval message queue,
but that part can be held over for 8.1 if it turns out too ugly.
releases, a nonzero 'c' argument meant that the input string could be
terminated by either that character or \0. Recent refactoring broke
that, causing the thing to scan for 'c' only. This went undetected
because no part of the main code actually passes nonzero 'c'. However
it broke tsearch2 and possibly other user-written code that assumed
the old definition. Per report from Tom Hebbron.
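The intended contract, in a small sketch:

    #include <stddef.h>

    /* Length of the token starting at s: scan up to either the
     * terminator c (when nonzero) or the NUL, whichever comes first.
     * The refactoring bug made the scan look only for c. */
    static size_t
    token_length(const char *s, char c)
    {
        size_t n = 0;

        while (s[n] != '\0' && (c == '\0' || s[n] != c))
            n++;
        return n;
    }
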
Also performed an initial run through of upgrading our Copyright date to
extend to 2005 ... first run here was very simple ... change everything
where: grep 1996-2004 && the word 'Copyright' ... scanned through the
generated list with 'less' first, and after, to make sure that I only
picked up the right entries ...
(rd_att) field of a nailed-in-cache relcache entry. This fixes the bug
reported by Alvaro 8-Dec-2004; I believe it probably also explains
Grant Finnemore's report of 10-Sep-2004.
In an unrelated change in the same file, put back 7.4's response to
failure to rename() the relcache init file, ie, unlink the useless
temp file. I did not put back the warning message, since there might
actually be some reason not to have that.
of an inheritance child table is binary-compatible with the rowtype of
its parent, invent an expression node type that does the conversion
correctly. Fixes the new bug exhibited by Kris Shannon as well as a
lot of old bugs that would only show up when using multiple inheritance
or after altering the parent table.
the "ps" argument list on Unix - meaning that there is no way to
identify, for example, the stats processes or the bgwriter.
This patch adds this functionality, in a bit of a crufty way. It creates
a kernel Event object with the name of what would be in the title. This
can be viewed using for example Process Explorer.
It's been very handy for me during both debugging and normal use. I haven't
figured out a better way, but perhaps someone has one that's less crufty? If
not, here is at least a working patch :-)
Magnus Hagander
escapes --- they aren't simply quoted characters. Problem noted by
Antti Salmela. Also fix problem with incorrect handling of multibyte
characters when followed by a quantifier.
error conditions during regexp compile, but not during regexp execution;
any sort of "can't happen" errors would be treated as no-match instead
of being reported as they should be. Noticed while trying to duplicate
a reported Tcl bug.
to be processed by GUC before InitPostgres, because any required lookup
of the encoding conversion function has to be done during InitializeClientEncoding.
So, I broke this last week by moving GUC processing to after InitPostgres :-(.
What we can do as a compromise is process non-SUSET variables during
command line scanning (the same as before), and postpone the processing
of only SUSET variables. None of the SUSET variables need to be set
before InitPostgres.
fill factor has been exceeded. We usually run with ffactor == 1, but
the way the test was coded, it wouldn't split a bucket until the actual
fill factor reached 2.0, because of use of integer division. Change
from > to >= so that it will split more aggressively when the table
starts to get full.
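The arithmetic in question, as a sketch:

    #include <stdbool.h>

    static bool
    needs_split(long nentries, long nbuckets, long ffactor)
    {
        /* old test: nentries / nbuckets > ffactor
         * with ffactor == 1 and integer division, that is true only
         * once the real fill ratio reaches 2.0 */
        return nentries / nbuckets >= ffactor;  /* split at ffactor */
    }
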
few cycles during transaction exit. A typical session probably
wouldn't have as many as half a dozen portals open at once, so the
original value of 64 seems far larger than needed.
plain SUSET instead. Also delay processing of options received in
client connection request until after we know if the user is a superuser,
so that SUSET values can be set that way by legitimate superusers.
Per recent discussion.
estimates when combining the estimates for a range query. As pointed out
by Miquel van Smoorenburg, the existing check for an impossible combined
result would quite possibly fail to detect one default and one non-default
input. It seems better to use the default range query estimate in such
cases. To do so, add a check for an estimate of exactly DEFAULT_INEQ_SEL.
This is a bit ugly because it introduces additional coupling between
clauselist_selectivity and scalarltsel/scalargtsel, but it's not like
there wasn't plenty already...
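A sketch of the combination and the new guard (the DEFAULT_* values
follow my reading of selfuncs.c; the surrounding logic is simplified):

    #define DEFAULT_INEQ_SEL       0.3333333333333333
    #define DEFAULT_RANGE_INEQ_SEL 0.005

    static double
    combine_range_sel(double s_lt, double s_gt)
    {
        double s2 = s_lt + s_gt - 1.0;      /* P(lo < x AND x < hi) */

        if (s_lt == DEFAULT_INEQ_SEL || s_gt == DEFAULT_INEQ_SEL)
            return DEFAULT_RANGE_INEQ_SEL;  /* default input: don't trust s2 */
        if (s2 <= 0.0)
            return DEFAULT_RANGE_INEQ_SEL;  /* impossible combined result */
        return s2;
    }
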
clause implicitly whenever one is not given explicitly. Remove concept
of a schema having an associated tablespace, and simplify the rules for
selecting a default tablespace for a table or index. It's now just
(a) explicit TABLESPACE clause; (b) default_tablespace if that's not an
empty string; (c) database's default. This will allow pg_dump to use
SET commands instead of tablespace clauses to determine object locations
(but I didn't actually make it do so). All per recent discussions.
to DAY precision or coarser; leave the timezone alone when precision is
HOUR or less. This avoids surprises for inputs near a DST transition
time, as per example from Matthew Gabeler-Lee. (The only reason we
recalculate at all is so that outputs that are supposed to represent
days will come out as local midnight, and that's not relevant for sub-day
precision.)
use it, as per my proposal of yesterday. This gives us a means of
determining the zone offset to impute to an unlabeled timestamp that
is both efficient and reliable, unlike all our previous tries involving
mktime() and localtime(). The behavior for invalid or ambiguous times
at a DST transition is fixed to be really and truly "assume standard
time", fixing a bug that has come and gone repeatedly but was back
again in 7.4. (There is some ongoing discussion about whether we should
raise an error instead, but for the moment I'll make it do what it was
previously intended to do.)
of HeapTupleSatisfiesItself() to trigger a hint-bit update on the tuple:
if the row was updated or deleted by a subtransaction of my own transaction
that was later rolled back. This cannot occur in pre-8.0 of course, so
the hint-bit patch applied a couple weeks ago is OK for existing releases.
But for 8.0 it seems we had better fix things so that RI_FKey_check can
pass the correct buffer number to HeapTupleSatisfiesItself. Accordingly,
add fields to the TriggerData struct to carry the buffer ID(s) for the
old and new tuple(s). There are other possible solutions but this one
seems cleanest; it will allow other AFTER-trigger functions to safely
do tqual.c calls if they want to. Put new fields at end of struct so
that there is no API breakage.
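The shape of the struct change (types reduced to stand-ins, field list
abridged); appending keeps existing trigger code working:

    typedef int   Buffer;
    typedef void *HeapTuple;
    typedef void *Relation;

    typedef struct TriggerData
    {
        /* ... existing fields, in their existing order ... */
        Relation  tg_relation;
        HeapTuple tg_trigtuple;
        HeapTuple tg_newtuple;
        /* new buffer fields appended at the end: no API breakage */
        Buffer    tg_trigtuplebuf;
        Buffer    tg_newtuplebuf;
    } TriggerData;
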
try to display it as a reference to the underlying column instead. This
is a legitimate substitution (it wouldn't be for a named join) and it
fixes some cases where the display would otherwise be ambiguous. Per
example from Sim Zacks.
returning a NULL pointer (some callers remembered to check the return
value, but some did not -- it is safer to just bail out).
Also, cleanup pgstat.c to use elog(ERROR) rather than elog(LOG) followed
by exit().
examinable by non-superusers, and use it to protect the recently-added
GUC variables for data directory and config files. For now I have only
flagged those variables that could be used to deduce something about
the server's filesystem layout, but possibly we should also mark vars
related to logging settings and other admin-only information?
to make life cushy for the JDBC driver. Centralize the decision-making
that affects this by inventing a get_type_func_class() function, rather
than adding special cases in half a dozen places.
This does not disable the bgwriter process: it still has to wake up often
enough to collect fsync requests from backends in a timely fashion. But
it responds to the recent gripe about not being able to prevent the disk
from being spun up constantly.
only covered the case of assigning "", and failed to recognize that
actually setlocale(LC_MESSAGES,...) does not work at all on this platform.
Magnus Hagander, some code prettification by Tom Lane.
pins at end of transaction, and reduce AtEOXact_Buffers to an Assert
cross-check that this was done correctly. When not USE_ASSERT_CHECKING,
AtEOXact_Buffers is a complete no-op. This gets rid of an O(NBuffers)
bottleneck during transaction commit/abort, which recent testing has shown
becomes significant above a few tens of thousands of shared buffers.
http://archives.postgresql.org/pgsql-hackers/2004-10/msg00464.php.
This fix is intended to be permanent: it moves the responsibility for
calling SetBufferCommitInfoNeedsSave() into the tqual.c routines,
eliminating the requirement for callers to test whether t_infomask changed.
Also, tighten validity checking on buffer IDs in bufmgr.c --- several
routines were paranoid about out-of-range shared buffer numbers but not
about out-of-range local ones, which seems a tad pointless.
- remove another senseless "extern" keyword that was applied to a
function definition
- change a few more function signatures from "some_type foo()" to
"some_type foo(void)"
- rewrite another K&R style function definition
- make the type of the "action" function pointer in the KeyWord struct
in src/backend/utils/adt/formatting.c more precise
- replace some function signatures of the form "some_type foo()" with
"some_type foo(void)"
- replace a few instances of a literal 0 being used as a NULL pointer;
there are more instances of this in the code, but I just fixed a few
- in src/backend/utils/mb/wstrncmp.c, replace K&R style function
declarations with ANSI style, remove use of 'register' keyword
- remove an "extern" modifier that was applied to a function definition
(rather than a declaration)
The vars are renamed to data_directory, config_file, hba_file, and
ident_file, and are guaranteed to be set to accurate absolute paths
during postmaster startup.
This commit does not yet do anything about hiding path values from
non-superusers.
Refactor code into something reasonably understandable, cause
use of the feature to not fail in standalone backends or in
EXEC_BACKEND case, fix sloppy guc.c table entries, make the
documentation minimally usable.
list elements comma-separated instead of barfing. This allows elimination
of half a dozen redundant copies of that behavior, and also makes the
world safe again for pg_get_expr() applied to pg_index.indexprs, per gripe
from Alexander Zhiltsov.
bigint variants). Clean up some inconsistencies in error message wording.
Fix scanint8 to allow trailing whitespace in INT64_MIN case. Update
int8-exp-three-digits.out, which seems to have been ignored by the last
couple of people to modify the int8 regression test, and remove
int8-exp-three-digits-win32.out which is thereby exposed as redundant.
This seems the cleanest way of fixing its lack of a shutdown callback,
which was preventing it from working correctly in a query that didn't
run it to completion. Per bug report from Szima Gábor.
must be stale. Tweak example startup scripts to not use pg_ctl but launch
the postmaster directly, thereby ensuring that only the postmaster's direct
parent shell will be a postgres-owned process. In combination these should
fix the longstanding problem of the postmaster sometimes refusing to start
during reboot because it thinks the old lockfile is not stale.
(1) Replace while loop with the new forboth() construct in
parser/analyze.c
(2) Replace lcons() with lappend() in SearchCatCacheList(). Since these
now have the same performance, there is no reason to prefer lcons() in
this case, and using lappend() leads to cleaner code.
(3) Improve the name of the second parameter to for_each_cell()
and hopefully improve code clarity while at it. One intentional
semantics change: a backslashed space will not be treated as removable
trailing whitespace, as the prior coding would do. ISTM that if it
wouldn't be considered removable leading whitespace, it shouldn't be
stripped at the end either.
setting is valid must ignore that state and permit the assignment anyway
when source is PGC_S_OVERRIDE. Otherwise they may disallow a rollback
at transaction abort, which is The Wrong Thing. Per example from
Michael Fuhr 12-Sep-04.
pg_subtrans --- what we need is the oldest xmin of any snapshot in use
in the current top transaction. Introduce a new variable TransactionXmin
to play this role. Fixes intermittent regression failure reported by
Neil Conway.
as per recent discussions. Invent SubTransactionIds that are managed like
CommandIds (ie, counter is reset at start of each top transaction), and
use these instead of TransactionIds to keep track of subtransaction status
in those modules that need it. This means that a subtransaction does not
need an XID unless it actually inserts/modifies rows in the database.
Accordingly, don't assign it an XID nor take a lock on the XID until it
tries to do that. This saves a lot of overhead for subtransactions that
are only used for error recovery (eg plpgsql exceptions). Also, arrange
to release a subtransaction's XID lock as soon as the subtransaction
exits, in both the commit and abort cases. This avoids holding many
unique locks after a long series of subtransactions. The price is some
additional overhead in XactLockTableWait, but that seems acceptable.
Finally, restructure the state machine in xact.c to have a more orthogonal
set of states for subtransactions.
mode see a fresh snapshot for each command in the function, rather than
using the latest interactive command's snapshot. Also, suppress fresh
snapshots as well as CommandCounterIncrement inside STABLE and IMMUTABLE
functions, instead using the snapshot taken for the most closely nested
regular query. (This behavior is only sane for read-only functions, so
the patch also enforces that such functions contain only SELECT commands.)
As per my proposal of 6-Sep-2004; I note that I floated essentially the
same proposal on 19-Jun-2002, but that discussion tailed off without any
action. Since 8.0 seems like the right place to be taking possibly
nontrivial backwards compatibility hits, let's get it done now.
rather than when returning to the idle loop. This makes no particular
difference for interactively-issued queries, but it makes a big difference
for queries issued within functions: trigger execution now occurs before
the calling function is allowed to proceed. This responds to numerous
complaints about nonintuitive behavior of foreign key checking, such as
http://archives.postgresql.org/pgsql-bugs/2004-09/msg00020.php, and
appears to be required by the SQL99 spec.
Also take the opportunity to simplify the data structures used for the
pending-trigger list, rename them for more clarity, and squeeze out a
bit of space.
Asserts would lead to a server core dump if an error occurred while
trying to abort a failed subtransaction (thereby leading to re-execution
of whatever parts of AbortSubTransaction had already run). This of course
does not prevent such an error from creating an infinite loop, but at
least we don't make the situation worse. Responds to an open item on
the subtransactions to-do list.
elog() emulation code always calls errstart with ERROR error level.
This means that a recursive error call triggered by elog would do
MemoryContextReset(ErrorContext), whether or not this was actually
appropriate. I'm surprised we haven't seen this in the field...
Messages of less than ERROR severity should never be promoted (this
fixes Gaetano Mendola's problem with a COMMERROR becoming a PANIC,
and is obvious in hindsight anyway). Do all promotion in errstart
not errfinish, to ensure that output decisions are made correctly;
the former coding could suppress logging of promoted errors, which
doesn't seem like a good idea. Eliminate some redundant code too.
use of already-freed strings, other silliness. Also fix reporting of
config file syntax errors so that it actually works reasonably well
(eg, points at the correct line). Use palloc instead of malloc for
temporary storage to reduce code clutter.
not supposed to (fixes problem with postmaster aborting due to mistaken
postgresql.conf change); don't call superuser() when not inside a
transaction (fixes coredump when, eg, try to set log_statement from
PGOPTIONS); some message style guidelines enforcement.
to allow DBA to choose the form in which log filenames reflect the
current time. Also allow for truncating instead of appending to
pre-existing files --- this is convenient when the log filename pattern
rewrites the same names cyclically. Per Ed L.
relcache entries. Also, change TransactionIdIsCurrentTransactionId()
so that if consulted during transaction abort, it will not say that
the aborted xact is still current. (It would be better to ensure that
it's never called at all during abort, but I'm not sure we can easily
guarantee that.) In combination, these fix a crash we have seen
occasionally during parallel regression tests of 8.0.
from being accepted after the outer right brace. Per report from
Markus Bertheau.
Also add regression test cases for this change, and for previous
recent array literal parser changes.
PROCLOCK structs in shared memory now have only a bitmask for held
locks, rather than counts (making them 40 bytes smaller, which is a
good thing). Multiple locks within a transaction are counted in the
local hash table instead, and we have provision for tracking which
ResourceOwner each count belongs to. Solves recently reported problem
with memory leakage within long transactions.
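A sketch of the representation split (names follow the description
above, details omitted):

    #include <stdint.h>

    typedef uint32_t LOCKMASK;

    #define LOCKBIT_ON(lockmode) ((LOCKMASK) 1 << (lockmode))

    /* shared memory: only which modes are held, no counts */
    typedef struct PROCLOCK
    {
        LOCKMASK holdMask;
    } PROCLOCK;

    /* backend-local: the counts live here, per lock, and can be
     * attributed to individual ResourceOwners */
    typedef struct LOCALLOCK
    {
        long nLocks;    /* total local grants of this lock */
        /* plus an array of (ResourceOwner, count) pairs, omitted */
    } LOCALLOCK;
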
for every command executed within a transaction. For long transactions
this was a significant memory leak. Instead, we can delete a portal's
or subtransaction's ResourceOwner immediately, if we physically transfer
the information about its locks up to the parent owner. This does not
fully solve the leak problem; we need to do something about counting
multiple acquisitions of the same lock in order to fix it. But it's a
necessary step along the way.
RecentXmin (== MyProc->xmin). This ensures that it will be safe to
truncate pg_subtrans at RecentGlobalXmin, which should largely eliminate
any fear of bloat. Along the way, eliminate SubTransXidsHaveCommonAncestor,
which isn't really needed and could not give a trustworthy result anyway
under the lookback restriction.
In an unrelated but nearby change, #ifdef out GetUndoRecPtr, which has
been dead code since 2001 and seems unlikely to ever be resurrected.
> why does CVS tip still give me
>
> regression=# select extract(century from now());
> date_part
> -----------
> 20
> (1 row)
> [ ... looks in code ... ]
>
> Apparently it's because you fixed only timestamp_part, and not
> timestamptz_part. I'm not too sure about what timestamp_trunc or
> timestamptz_trunc should do, but they may be wrong as well.
Sigh... as usual, what is not tested does not work:-(
> Could we have a more complete patch?
Please find a submission attached. I hope it really fixes all decade,
century and millennium issues for extract and *_trunc functions on
interval and the other timestamp types. If someone could check that the
results are reasonable, it would be great.
I indeed overlooked the fact that there were two functions. The patch
fixes the code so that both variants agree.
I added comments to the interval extraction code, because it relies on C
division truncating toward zero, leaving a negative remainder: -7/10 = 0
with remainder -7.
As for *_trunc functions, I have chosen to put the first year of the
century or millennium: -100, 1, 101... 1001 2001 etc. Indeed, I don't
think it would make sense to put 2000 (last year of the 2nd millennium)
for rounding all years of the third millennium.
I also fixed the code so that all decades last 10 years and decade 199
means the 1990's.
I have added some tests that cover the tricky cases. The formula may be
simplified, but all these cases must pass. Please keep them.
Fabien Coelho
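A compact model of the corrected arithmetic described above (not the
backend code; the negative-year convention follows the "no year zero"
rule stated earlier):

    #include <assert.h>

    static int
    decade(int year)
    {
        return year / 10;           /* 1990..1999 -> 199 */
    }

    static int
    century(int year)
    {
        /* no year 0: years 1..100 are century 1, -1..-100 century -1 */
        if (year > 0)
            return (year + 99) / 100;
        return -((-year + 99) / 100);
    }

    int
    main(void)
    {
        assert(decade(1999) == 199 && decade(1990) == 199);
        assert(century(2000) == 20 && century(2001) == 21);
        assert(century(1) == 1 && century(-1) == -1);
        return 0;
    }
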
presence of dropped columns. Document the already-presumed fact that
eref aliases in relation RTEs are supposed to have entries for dropped
columns; cause the user alias structs to have such entries too, so that
there's always a one-to-one mapping to the underlying physical attnums.
Adjust expandRTE() and related code to handle the case where a column
that is part of a JOIN has been dropped. Generalize expandRTE()'s API
so that it can be used in a couple of places that formerly rolled their
own implementation of the same logic. Fix ruleutils.c to suppress
display of aliases for columns that were dropped since the rule was made.
to the physical layout of the rowtype, ie, there are dummy arguments
corresponding to any dropped columns in the rowtype. We formerly had a
couple of places that did it this way and several others that did not.
Fixes Gaetano Mendola's "cache lookup failed for type 0" bug of 5-Aug.