Commit Graph

4213 Commits

Author SHA1 Message Date
Tom Lane
bd0a260928 Make CREATE/DROP/RENAME DATABASE wait a little bit to see if other backends
will exit before failing because of conflicting DB usage.  Per discussion,
this seems a good idea to help mask the fact that backend exit takes nonzero
time.  Remove a couple of thereby-obsoleted sleeps in contrib and PL
regression test sequences.
2007-06-01 19:38:07 +00:00
Tom Lane
bd2c980b22 Buy back some of the cycles spent in more-expensive hash functions by
selecting power-of-2, rather than prime, numbers of buckets in hash joins.
If the hash functions are doing their jobs properly by making all hash bits
equally random, this is good enough, and it saves expensive integer division
and modulus operations.
2007-06-01 17:38:44 +00:00
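
A minimal standalone C sketch of the trade-off described above (illustrative
only, not PostgreSQL code): with a power-of-2 bucket count, the bucket number
falls out of a bitwise AND against nbuckets - 1, so no integer division or
modulus is needed.

    #include <stdint.h>
    #include <stdio.h>

    /* Round n up to the next power of 2 (invented helper). */
    static uint32_t next_pow2(uint32_t n)
    {
        uint32_t p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    int main(void)
    {
        uint32_t nbuckets = next_pow2(1000);    /* 1024 */
        uint32_t mask = nbuckets - 1;
        uint32_t hashvalue = 0xDEADBEEF;

        /* With a power-of-2 table, hash & mask gives the same bucket as
         * hash % nbuckets, without the expensive division. */
        printf("bucket = %u (modulus gives %u)\n",
               (unsigned) (hashvalue & mask),
               (unsigned) (hashvalue % nbuckets));
        return 0;
    }
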
Tom Lane
1f559b7d3a Fix several hash functions that were taking chintzy shortcuts instead of
delivering a well-randomized hash value.  I got religion on this after
observing that performance of multi-batch hash join degrades terribly if the
higher-order bits of hash values aren't random, as indeed was true for say
hashes of small integer values.  It's now expected and documented that hash
functions should use hash_any or some comparable method to ensure that all
bits of their output are about equally random.

initdb forced because this change invalidates existing hash indexes.  For the
same reason, this isn't back-patchable; the hash join performance problem
will get a band-aid fix in the back branches.
2007-06-01 15:33:19 +00:00
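
The "all bits equally random" point shows up even in a toy example.  The mixer
below is FNV-1a, standing in for hash_any (PostgreSQL's real mixer is
different); the contrast is with the shortcut of returning the value itself,
which leaves the high-order bits of small integers all zero, exactly the bits
that multi-batch hash joins select batches from.

    #include <stdint.h>
    #include <stdio.h>

    /* Generic byte mixer (FNV-1a), used here only as a stand-in for a
     * hash_any-style function: every input byte affects every output bit. */
    static uint32_t hash_bytes(const unsigned char *k, size_t len)
    {
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < len; i++)
        {
            h ^= k[i];
            h *= 16777619u;
        }
        return h;
    }

    int main(void)
    {
        int32_t v = 42;

        /* Chintzy shortcut: hash == value.  High bits are nearly always zero. */
        uint32_t shortcut = (uint32_t) v;

        /* Better: run the raw bytes through a real mixer. */
        uint32_t mixed = hash_bytes((const unsigned char *) &v, sizeof(v));

        printf("shortcut=%08x  mixed=%08x\n",
               (unsigned) shortcut, (unsigned) mixed);
        return 0;
    }
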
Tom Lane
10f719af33 Change build_index_pathkeys() so that the expressions it builds to represent
index key columns always have the type expected by the index's associated
operators, ie, we add RelabelType nodes when dealing with binary-compatible
index opclasses.  This is needed to get varchar indexes to play nicely with
the new EquivalenceClass machinery, as per recent gripe from Josh Berkus that
CVS HEAD was failing to match a varchar index column to a constant restriction
in the query.

It seems likely that this change will allow removal of a lot of ugly ad-hoc
RelabelType-stripping that the planner has traditionally done while matching
expressions to other expressions, but I'll worry about that some other day.
2007-05-31 16:57:34 +00:00
Tom Lane
d526575f89 Make large sequential scans and VACUUMs work in a limited-size "ring" of
buffers, rather than blowing out the whole shared-buffer arena.  Aside from
avoiding cache spoliation, this fixes the problem that VACUUM formerly tended
to cause a WAL flush for every page it modified, because we had it hacked to
use only a single buffer.  Those flushes will now occur only once per
ring-ful.  The exact ring size, and the threshold for seqscans to switch into
the ring usage pattern, remain under debate; but the infrastructure seems
done.  The key bit of infrastructure is a new optional BufferAccessStrategy
object that can be passed to ReadBuffer operations; this replaces the former
StrategyHintVacuum API.

This patch also changes the buffer usage-count methodology a bit: we now
advance usage_count when first pinning a buffer, rather than when last
unpinning it.  To preserve the behavior that a buffer's lifetime starts to
decrease when it's released, the clock sweep code is modified to not decrement
usage_count of pinned buffers.

Work not done in this commit: teach GiST and GIN indexes to use the vacuum
BufferAccessStrategy for vacuum-driven fetches.

Original patch by Simon, reworked by Heikki and again by Tom.
2007-05-30 20:12:03 +00:00
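
A toy illustration of the ring idea (invented names, none of the real
BufferAccessStrategy machinery): a scan that cycles through a small fixed set
of slots can evict only pages it loaded itself, leaving the rest of the cache
alone.

    #include <stdio.h>

    #define RING_SIZE 8

    typedef struct
    {
        int slots[RING_SIZE];   /* page currently held in each ring slot */
        int current;            /* next slot to reuse */
    } RingStrategy;

    static int ring_get_slot(RingStrategy *ring, int pageno)
    {
        int slot = ring->current;

        ring->current = (ring->current + 1) % RING_SIZE;
        ring->slots[slot] = pageno;     /* recycle the scan's own buffer */
        return slot;
    }

    int main(void)
    {
        RingStrategy ring = {{0}, 0};

        /* A 100-page sequential scan touches only RING_SIZE distinct slots. */
        for (int page = 0; page < 100; page++)
            ring_get_slot(&ring, page);

        printf("slots used: %d\n", RING_SIZE);
        return 0;
    }
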
Tom Lane
14c4d3dea9 Fix trivial misspelling in comment. 2007-05-30 16:16:32 +00:00
Neil Conway
6af04882de Fix a bug in input processing for the "interval" type. Previously,
"microsecond" and "millisecond" units were not considered valid input
by themselves, which caused inputs like "1 millisecond" to be rejected
erroneously.

Update the docs, add regression tests, and backport to 8.2 and 8.1.
2007-05-29 04:58:43 +00:00
Tom Lane
97d12b434f Ooops, I was too busy worrying about getting the transactional infrastructure
right to think carefully about how insert and delete counts map to
n_live_tuples.  Of course a deletion should reduce n_live_tuples.
2007-05-27 17:28:36 +00:00
Tom Lane
8d675c85c5 pgstat's on-proc-exit hook has to execute after the last transaction commit
or abort within a backend; rearrange InitPostgres processing to make it so.
Revealed by just-added Asserts along with ECPG regression tests (hm, I wonder
why the core regression tests didn't expose it?).  This possibly is another
reason for missing stats updates ...
2007-05-27 05:37:50 +00:00
Tom Lane
77947c51c0 Fix up pgstats counting of live and dead tuples to recognize that committed
and aborted transactions have different effects; also teach it not to assume
that prepared transactions are always committed.

Along the way, simplify the pgstats API by tying counting directly to
Relations; I cannot detect any redeeming social value in having stats
pointers in HeapScanDesc and IndexScanDesc structures.  And fix a few
corner cases in which counts might be missed because the relation's
pgstat_info pointer hadn't been set.
2007-05-27 03:50:39 +00:00
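
A hypothetical counter sketch of the commit/abort distinction (names invented;
the real bookkeeping lives in pgstat.c): inserts and deletes are tallied per
transaction and only folded into the live/dead totals according to the
transaction's outcome.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        long inserted;      /* tuples inserted by the current transaction */
        long deleted;       /* tuples deleted by the current transaction */
        long n_live;        /* table's live-tuple estimate */
        long n_dead;        /* table's dead-tuple estimate */
    } TabStats;

    static void end_xact(TabStats *t, bool committed)
    {
        if (committed)
        {
            t->n_live += t->inserted - t->deleted;  /* deletes reduce live */
            t->n_dead += t->deleted;                /* old versions are dead */
        }
        else
        {
            t->n_dead += t->inserted;   /* aborted inserts are dead garbage;
                                         * the "deleted" tuples stay live */
        }
        t->inserted = t->deleted = 0;
    }

    int main(void)
    {
        TabStats t = {0, 0, 1000, 0};

        t.inserted = 10; t.deleted = 3;
        end_xact(&t, true);
        printf("after commit: live=%ld dead=%ld\n", t.n_live, t.n_dead);

        t.inserted = 5;
        end_xact(&t, false);
        printf("after abort:  live=%ld dead=%ld\n", t.n_live, t.n_dead);
        return 0;
    }
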
Tom Lane
604ffd280b Create hooks to let a loadable plugin monitor (or even replace) the planner
and/or create plans for hypothetical situations; in particular, investigate
plans that would be generated using hypothetical indexes.  This is a
heavily-rewritten version of the hooks proposed by Gurjeet Singh for his
Index Advisor project.  In this formulation, the index advisor can be
entirely a loadable module instead of requiring a significant part to be
in the core backend, and plans can be generated for hypothetical indexes
without requiring the creation and rolling-back of system catalog entries.

The index advisor patch as-submitted is not compatible with these hooks,
but it needs significant work anyway due to other 8.2-to-8.3 planner
changes.  With these hooks in the core backend, development of the advisor
can proceed as a pgfoundry project.
2007-05-25 17:54:25 +00:00
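
For orientation, a loadable module would install such a hook roughly as
sketched below.  The names (planner_hook, standard_planner) and the
three-argument signature reflect my reading of the hook design described
above; check optimizer/planner.h rather than treating this as definitive.

    /* Compile as a PostgreSQL extension, not standalone. */
    #include "postgres.h"
    #include "fmgr.h"
    #include "optimizer/planner.h"

    PG_MODULE_MAGIC;

    static planner_hook_type prev_planner_hook = NULL;

    static PlannedStmt *
    advisor_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
    {
        /* A real index advisor could inject hypothetical indexes here and
         * inspect the plan produced below. */
        if (prev_planner_hook)
            return prev_planner_hook(parse, cursorOptions, boundParams);
        return standard_planner(parse, cursorOptions, boundParams);
    }

    void
    _PG_init(void)
    {
        prev_planner_hook = planner_hook;
        planner_hook = advisor_planner;
    }
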
Tom Lane
11086f2f2b Repair planner bug introduced in 8.2 by ability to rearrange outer joins:
in cases where a sub-SELECT inserts a WHERE clause between two outer joins,
that clause may prevent us from re-ordering the two outer joins.  The code
was considering only the joins' own ON-conditions in determining reordering
safety, which is not good enough.  Add a "delay_upper_joins" flag to
OuterJoinInfo to flag that we have detected such a clause and higher-level
outer joins shouldn't be permitted to commute with this one.  (This might
seem overly coarse, but given the current rules for OJ reordering, it's
sufficient AFAICT.)

The failure case is actually pretty narrow: it needs a WHERE clause within
the RHS of a left join that checks the RHS of a lower left join, but is not
strict for that RHS (else we'd have simplified the lower join to a plain
join).  Even then no failure will be manifest unless the planner chooses to
rearrange the join order.

Per bug report from Adam Terrey.
2007-05-22 23:23:58 +00:00
Tom Lane
d7153c5fad Fix best_inner_indexscan to return both the cheapest-total-cost and
cheapest-startup-cost innerjoin indexscans, and make joinpath.c consider
both of these (when different) as the inside of a nestloop join.  The
original design was based on the assumption that indexscan paths always
have negligible startup cost, and so total cost is the only important
figure of merit; an assumption that's obviously broken by bitmap
indexscans.  This oversight could lead to choosing poor plans in cases
where fast-start behavior is more important than total cost, such as
LIMIT and IN queries.  8.1-vintage brain fade exposed by an example from
Chuck D.
2007-05-22 01:40:33 +00:00
Tom Lane
2415ad9831 Teach tuplestore.c to throw away data before the "mark" point when the caller
is using mark/restore but not rewind or backward-scan capability.  Insert a
materialize plan node between a mergejoin and its inner child if the inner
child is a sort that is expected to spill to disk.  The materialize shields
the sort from the need to do mark/restore and thereby allows it to perform
its final merge pass on-the-fly; while the materialize itself is normally
cheap since it won't spill to disk unless the number of tuples with equal
key values exceeds work_mem.

Greg Stark, with some kibitzing from Tom Lane.
2007-05-21 17:57:35 +00:00
Peter Eisentraut
3963574d13 XPath fixes:
- Function renamed to "xpath".
- Function is now strict, per discussion.
- Return an empty array when the XPath expression matches nothing
  (previously, NULL was returned in that case), per discussion.
- (bugfix) Work with fragments that have a prologue: select xpath('/a',
  '<?xml version="1.0"?><a /><b />'); // the XML datum is now always wrapped
  in a dummy <x>...</x>, and the XML prologue (if any) simply goes away.
- Some cleanup.

Nikolay Samokhvalov

Some code cleanup and documentation work by myself.
2007-05-21 17:10:29 +00:00
Tom Lane
a8d539f124 To support external compression of archived WAL data, add a flag bit to
WAL records that shows whether it is safe to remove full-page images
(ie, whether or not an on-line backup was in progress when the WAL entry
was made).  Also make provision for an XLOG_NOOP record type that can be
used to fill in the extra space when decompressing the data for restore.

This is the portion of Koichi Suzuki's "full page writes" patch that
has to go into the core database.  The remainder of that work is two
external compression and decompression programs, which for the time being
will undergo separate development on pgfoundry.  Per discussion.

Also, twiddle the handling of BTREE_SPLIT records to ensure it'll be
possible to compress them (the previous coding caused essential info
to be omitted).  The other commonly-used record types seem OK already,
with the possible exception of GIN and GIST WAL records, which I don't
understand well enough to opine on.
2007-05-20 21:08:19 +00:00
Alvaro Herrera
b40776d221 Have CLUSTER advance the table's relfrozenxid. The new frozen point is the
FreezeXid introduced in a recent commit, so there isn't any data loss in this
approach.

Doing it causes ALTER TABLE (or rather, the forms of it that cause a full table
rewrite) to be affected as well.  In this case, the frozen point is RecentXmin,
because after the rewrite all the tuples are relabeled with the rewriting
transaction's Xid.

TOAST tables are fixed automatically as well, as fallout of the way they were
already being handled in the respective code paths.

With this patch, there is no longer need to VACUUM tables for Xid wraparound
purposes that have been cleaned up via TRUNCATE or CLUSTER.
2007-05-18 23:19:42 +00:00
Tom Lane
dbb769352d Temporary fix for the problem that pg_stat_activity, inet_client_addr(),
and inet_server_addr() fail if the client connected over a "scoped" IPv6
address.  In this case getnameinfo() will return a string ending with
a poorly-standardized "%something" zone specifier, which these functions
try to feed to network_in(), which won't take it.  So that we don't lose
functionality altogether, suppress the zone specifier before giving the
string to network_in().  Per report from Brian Hirt.

TODO: probably someday the inet type should support scoped IPv6 addresses,
and then this patch should be reverted.

Backpatch to 8.2 ... is it worth going further?
2007-05-17 23:31:49 +00:00
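
The suppression itself amounts to truncating the string at the '%'.  A
standalone sketch (strip_zone is an invented helper, not backend code; the
real change feeds the truncated string to network_in):

    #include <stdio.h>
    #include <string.h>

    /* Cut off a zone specifier such as "%eth0" from a getnameinfo() result
     * so the remainder parses as a plain inet value. */
    static void strip_zone(char *addr)
    {
        char *pct = strchr(addr, '%');

        if (pct != NULL)
            *pct = '\0';
    }

    int main(void)
    {
        char addr[] = "fe80::1%eth0";

        strip_zone(addr);
        printf("%s\n", addr);   /* prints "fe80::1" */
        return 0;
    }
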
Tom Lane
b11123b675 Fix parameter recalculation for Limit nodes: during a ReScan call we must
recompute the limit/offset immediately, so that the updated values are
available when the child's ReScan function is invoked.  Add a regression
test for this, too.  Bug is new in HEAD (due to the bounded-sorting patch)
so no need for back-patch.

I did not do anything about merging this signaling with chgParam processing,
but if we were to do that we'd still need to compute the updated values
at this point rather than during the first ProcNode call.

Per observation and test case from Greg Stark, though I didn't use his patch.
2007-05-17 19:35:08 +00:00
Alvaro Herrera
3b0347b36e Move the tuple freezing point in CLUSTER to a point further back in the past,
to avoid losing useful Xid information in not-so-old tuples.  This makes
CLUSTER behave the same as VACUUM as far as tuple-freezing behavior goes
(though CLUSTER does not yet advance the table's relfrozenxid).

While at it, move the actual freezing operation in rewriteheap.c to a more
appropriate place, and document it thoroughly.  This part of the patch from
Tom Lane.
2007-05-17 15:28:29 +00:00
Alvaro Herrera
90cbc63fd1 Have TRUNCATE advance the affected table's relfrozenxid to RecentXmin, to
avoid a later needless VACUUM for Xid-wraparound purposes.  We can do this
since the table is known to be left empty, so no Xid remains on it.

Per discussion.
2007-05-16 17:28:20 +00:00
Bruce Momjian
178214d2ae Update comments for PG_DETOAST_PACKED and VARDATA_ANY on structures
that require alignment.

Add a paragraph to the "User-Defined Types" chapter on using these
macros since it seems like they're a hit.

Gregory Stark
2007-05-15 17:39:54 +00:00
Tom Lane
9aa3c782c9 Fix the problem that creating a user-defined type named _foo, followed by one
named foo, would work but the other ordering would not.  If a user-specified
type or table name collides with an existing auto-generated array name, just
rename the array type out of the way by prepending more underscores.  This
should not create any backward-compatibility issues, since the cases in which
this will happen would have failed outright in prior releases.

Also fix an oversight in the arrays-of-composites patch: ALTER TABLE RENAME
renamed the table's rowtype but not its array type.
2007-05-12 00:55:00 +00:00
Tom Lane
d8326119c8 Fix my oversight in enabling domains-of-domains: ALTER DOMAIN ADD CONSTRAINT
needs to check the new constraint against columns of derived domains too.

Also, make it error out if the domain to be modified is used within any
composite-type columns.  Eventually we should support that case, but it seems
a bit painful, and not suitable for a back-patch.  For the moment just let the
user know we can't do it.

Backpatch to 8.2, which is the only released version that allows nested
domains.  Possibly the other part should be back-patched further.
2007-05-11 20:17:15 +00:00
Tom Lane
bc8036fc66 Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables).  Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage.  (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)

Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.

David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 17:57:14 +00:00
Tom Lane
5b7cf08d33 Reserve some pg_statistic "kind" codes for use by the ESRI ST_Geometry
datatype project.  Per request from Ale Raza (araza at esri.com).
2007-05-08 19:13:52 +00:00
Neil Conway
ade493e02d Add a hash function for "numeric". Mark the equality operator for
numerics as "oprcanhash", and make the corresponding system catalog
updates. As a result, hash indexes, hashed aggregation, and hash
joins can now be used with the numeric type. Bump the catversion.

The only tricky aspect to doing this is writing a correct hash
function: it's possible for two Numerics to be equal according to
their equality operator, but have different in-memory bit patterns.
To cope with this, the hash function doesn't consider the Numeric's
"scale" or "sign", and explicitly skips any leading or trailing
zeros in the Numeric's digit buffer (the current implementation
should suppress any such zeros, but it seems unwise to rely upon
this). See discussion on pgsql-patches for more details.
2007-05-08 18:56:48 +00:00
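
A toy model of that hashing rule (ToyNumeric and the FNV-style mixer are
inventions, not the real NumericData or hash_any): scale and sign are left out
of the hash, and leading/trailing zero digits are skipped, so encodings that
differ only in such padding hash identically.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct
    {
        int     ndigits;
        int16_t digits[8];      /* base-10000 digits */
        int     scale;          /* ignored by the hash */
        int     sign;           /* ignored by the hash */
    } ToyNumeric;

    static uint32_t hash_toy_numeric(const ToyNumeric *n)
    {
        int      start = 0, end = n->ndigits;
        uint32_t h = 2166136261u;

        /* Skip leading and trailing zero digits. */
        while (start < end && n->digits[start] == 0)
            start++;
        while (end > start && n->digits[end - 1] == 0)
            end--;

        /* Simple byte-wise mixer standing in for hash_any(). */
        for (int i = start; i < end; i++)
        {
            h ^= (uint32_t) n->digits[i];
            h *= 16777619u;
        }
        return h;
    }

    int main(void)
    {
        /* Same significant digits, with and without zero padding. */
        ToyNumeric a = {2, {12, 3000}, 2, 0};
        ToyNumeric b = {4, {0, 12, 3000, 0}, 4, 0};

        printf("%08x %08x\n",
               (unsigned) hash_toy_numeric(&a),
               (unsigned) hash_toy_numeric(&b));
        return 0;
    }
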
Tom Lane
d2a4a4069f Add a line to the EXPLAIN ANALYZE output for a Sort node, showing the
actual sort strategy and amount of space used.  By popular demand.
2007-05-04 21:29:53 +00:00
Tom Lane
c7464720a3 tas() support for Renesas' M32R processor. Kazuhiro Inaoka 2007-05-04 15:20:52 +00:00
Tom Lane
79ca7ffeb6 A few fixups in error handling: mark pg_re_throw() as noreturn for gcc,
and for other compilers, insert a dummy exit() call so that they understand
PG_RE_THROW() doesn't return.  Insert fflush(stderr) in ExceptionalCondition,
per recent buildfarm evidence that that might not happen automatically on some
platforms.  And const-ify ExceptionalCondition's declaration while at it.
2007-05-04 02:01:02 +00:00
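
The portability pattern looks roughly like this standalone sketch (macro and
function names invented; the real declarations live in elog.h and elog.c):

    #include <stdio.h>
    #include <stdlib.h>

    /* Tell gcc the function cannot return; give other compilers a dummy
     * exit() so their flow analysis reaches the same conclusion. */
    #ifdef __GNUC__
    #define NORETURN __attribute__((noreturn))
    #else
    #define NORETURN
    #endif

    static void my_re_throw(void) NORETURN;

    static void
    my_re_throw(void)
    {
        fflush(stderr);             /* push any pending output to the log */
        fprintf(stderr, "re-throwing error\n");
        abort();                    /* stand-in for the longjmp back to the
                                     * enclosing PG_CATCH block */
        exit(1);                    /* dummy call: never reached, but lets
                                     * non-gcc compilers see no fall-through */
    }

    int main(void)
    {
        my_re_throw();
    }
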
Tom Lane
d26559dbf3 Teach tuplesort.c about "top N" sorting, in which only the first N tuples
need be returned.  We keep a heap of the current best N tuples and sift-up
new tuples into it as we scan the input.  For M input tuples this means
only about M*log(N) comparisons instead of M*log(M), not to mention a lot
less workspace when N is small --- avoiding spill-to-disk for large M
is actually the most attractive thing about it.  Patch includes planner
and executor support for invoking this facility in ORDER BY ... LIMIT
queries.  Greg Stark, with some editorialization by moi.
2007-05-04 01:13:45 +00:00
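
A self-contained sketch of the bounded-heap idea for "smallest N of M" (plain
ints instead of tuples, and none of tuplesort's real machinery): each input
costs at most one comparison against the heap top plus an O(log N) sift, for
roughly M*log(N) comparisons overall.

    #include <stdio.h>
    #include <stdlib.h>

    /* Max-heap of the N smallest values seen so far. */
    static void sift_down(int *heap, int n, int i)
    {
        for (;;)
        {
            int largest = i, l = 2 * i + 1, r = 2 * i + 2;

            if (l < n && heap[l] > heap[largest]) largest = l;
            if (r < n && heap[r] > heap[largest]) largest = r;
            if (largest == i) break;
            int tmp = heap[i]; heap[i] = heap[largest]; heap[largest] = tmp;
            i = largest;
        }
    }

    static int cmp_int(const void *a, const void *b)
    {
        return (*(const int *) a > *(const int *) b) -
               (*(const int *) a < *(const int *) b);
    }

    int main(void)
    {
        enum { N = 5, M = 20 };
        int input[M], heap[N];

        srand(42);
        for (int i = 0; i < M; i++)
            input[i] = rand() % 1000;

        /* Seed the heap with the first N inputs and heapify. */
        for (int i = 0; i < N; i++)
            heap[i] = input[i];
        for (int i = N / 2 - 1; i >= 0; i--)
            sift_down(heap, N, i);

        /* Each later input either loses to the current worst of the best N
         * or replaces it. */
        for (int i = N; i < M; i++)
            if (input[i] < heap[0])
            {
                heap[0] = input[i];
                sift_down(heap, N, 0);
            }

        qsort(heap, N, sizeof(int), cmp_int);   /* order the winners */
        for (int i = 0; i < N; i++)
            printf("%d ", heap[i]);
        printf("\n");
        return 0;
    }
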
Tom Lane
0fef38da21 Tweak hash index AM to use the new ReadOrZeroBuffer bufmgr API when fetching
pages it intends to zero immediately.  Just to show there is some use for that
function besides WAL recovery :-).
Along the way, fold _hash_checkpage and _hash_pageinit calls into _hash_getbuf
and friends, instead of expecting callers to do that separately.
2007-05-03 16:45:58 +00:00
Tom Lane
8c3cc86e7b During WAL recovery, when reading a page that we intend to overwrite completely
from the WAL data, don't bother to physically read it; just have bufmgr.c
return a zeroed-out buffer instead.  This speeds recovery significantly,
and also avoids unnecessary failures when a page-to-be-overwritten has corrupt
page headers on disk.  This replaces a former kluge that accomplished the
latter by pretending zero_damaged_pages was always ON during WAL recovery;
which was OK when the kluge was put in, but is unsafe when restoring a WAL
log that was written with full_page_writes off.

Heikki Linnakangas
2007-05-02 23:18:03 +00:00
Tom Lane
88f1fd2989 Fix oversight in PG_RE_THROW processing: it's entirely possible that there
isn't any place to throw the error to.  If so, we should treat the error
as FATAL, just as we would have if it'd been thrown outside the PG_TRY
block to begin with.

Although this is clearly a *potential* source of bugs, it is not clear
at the moment whether it is an *actual* source of bugs; there may not
presently be any PG_TRY blocks in code that can be reached with no outer
longjmp catcher.  So for the moment I'm going to be conservative and not
back-patch this.  The change breaks ABI for users of PG_RE_THROW and hence
might create compatibility problems for loadable modules, so we should not
put it into released branches without proof that it's needed.
2007-05-02 15:32:42 +00:00
Tom Lane
c432061963 Change the timestamps recorded in transaction commit/abort xlog records
from time_t to TimestampTz representation.  This provides full gettimeofday()
resolution of the timestamps, which might be useful when attempting to
do point-in-time recovery --- previously it was not possible to specify
the stop point with sub-second resolution.  But mostly this is to get
rid of TimestampTz-to-time_t conversion overhead during commit.  Per my
proposal of a day or two back.
2007-04-30 21:01:53 +00:00
Tom Lane
641912b4d1 Fix oversight in my patch of yesterday: forgot to ensure that stats would
still be forced out at backend exit.
2007-04-30 16:37:08 +00:00
Tom Lane
957d08c81f Implement rate-limiting logic on how often backends will attempt to send
messages to the stats collector.  This avoids the problem that enabling
stats_row_level for autovacuum has a significant overhead for short
read-only transactions, as noted by Arjen van der Meijden.  We can avoid
an extra gettimeofday call by piggybacking on the one done for WAL-logging
xact commit or abort (although that doesn't help read-only transactions,
since they don't WAL-log anything).

In my proposal for this, I noted that we could change the WAL log entries
for commit/abort to record full TimestampTz precision, instead of only
time_t as at present.  That's not done in this patch, but will be committed
separately.
2007-04-30 03:23:49 +00:00
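
The rate-limiting shape, as a standalone sketch (the 500 ms interval and every
name here are illustrative assumptions, not the backend's actual constants):
counts accumulate locally and are shipped only when enough time has passed
since the last report.

    #include <stdio.h>

    #define MIN_REPORT_INTERVAL_MS 500

    static long last_report_ms = 0;
    static long pending_counts = 0;

    /* Called at transaction end.  "now_ms" would normally come from the
     * timestamp already taken for WAL-logging the commit/abort, so no extra
     * gettimeofday call is needed. */
    static void report_stats_at_xact_end(long now_ms)
    {
        pending_counts += 10;       /* pretend this xact touched some rows */
        if (now_ms - last_report_ms < MIN_REPORT_INTERVAL_MS)
            return;                 /* too soon: keep accumulating */
        printf("t=%ldms: sending %ld counts\n", now_ms, pending_counts);
        pending_counts = 0;
        last_report_ms = now_ms;
    }

    int main(void)
    {
        /* Five quick transactions, then one after a longer gap. */
        long times[] = {600, 700, 800, 900, 1000, 1600};

        for (int i = 0; i < 6; i++)
            report_stats_at_xact_end(times[i]);
        return 0;
    }
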
Tom Lane
bbbe825f5f Modify processing of DECLARE CURSOR and EXPLAIN so that they can resolve the
types of unspecified parameters when submitted via extended query protocol.
This worked in 8.2 but I had broken it during plancache changes.  DECLARE
CURSOR is now treated almost exactly like a plain SELECT through parse
analysis, rewrite, and planning; only just before sending to the executor
do we divert it away to ProcessUtility.  This requires a special-case check
in a number of places, but practically all of them were already special-casing
SELECT INTO, so it's not too ugly.  (Maybe it would be a good idea to merge
the two by treating IntoClause as a form of utility statement?  Not going to
worry about that now, though.)  That approach doesn't work for EXPLAIN,
however, so for that I punted and used a klugy solution of running parse
analysis an extra time if under extended query protocol.
2007-04-27 22:05:49 +00:00
Tom Lane
a2e923a652 Fix dynahash.c to suppress hash bucket splits while a hash_seq_search() scan
is in progress on the same hashtable.  This seems the least invasive way to
fix the recently-recognized problem that a split could cause the scan to
visit entries twice or (with much lower probability) miss them entirely.
The only field-reported problem caused by this is the "failed to re-find
shared lock object" PANIC in COMMIT PREPARED reported by Michel Dorochevsky,
which was caused by multiply visited entries.  However, it seems certain
that mdsync() is vulnerable to missing required fsync's due to missed
entries, and I am fearful that RelationCacheInitializePhase2() might be at
risk as well.  Because of that and the generalized hazard presented by this
bug, back-patch all the supported branches.

Along the way, fix pg_prepared_statement() and pg_cursor() to not assume
that the hashtables they are examining will stay static between calls.
This is risky regardless of the newly noted dynahash problem, because
hash_seq_search() has never promised to cope with deletion of table entries
other than the just-returned one.  There may be no bug here because the only
supported way to call these functions is via ExecMakeTableFunctionResult()
which will cycle them to completion before doing anything very interesting,
but it seems best to get rid of the assumption.  This affects 8.2 and HEAD
only, since those functions weren't there earlier.
2007-04-26 23:24:46 +00:00
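
The fix's shape can be sketched with a toy table (invented struct; the real
logic is in dynahash.c): track how many sequential scans are active and skip
bucket expansion while any are, so a scan never sees entries migrate to
another bucket mid-flight.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        int nentries;
        int nbuckets;
        int active_scans;   /* hash_seq_search() scans currently running */
    } ToyHashTable;

    static bool maybe_split(ToyHashTable *ht)
    {
        if (ht->active_scans > 0)
            return false;           /* defer: splitting now could make a scan
                                     * miss entries or visit them twice */
        if (ht->nentries > ht->nbuckets * 4)
        {
            ht->nbuckets *= 2;
            return true;
        }
        return false;
    }

    int main(void)
    {
        ToyHashTable ht = {100, 16, 0};

        ht.active_scans = 1;        /* a seq scan starts */
        printf("split during scan? %d\n", maybe_split(&ht));   /* 0 */

        ht.active_scans = 0;        /* scan finished */
        printf("split after scan?  %d\n", maybe_split(&ht));   /* 1 */
        return 0;
    }
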
Neil Conway
16efdb5ec7 Rename the newly-added commands for discarding session state.
RESET SESSION, RESET PLANS, and RESET TEMP are now DISCARD ALL,
DISCARD PLANS, and DISCARD TEMP, respectively. This is to avoid
confusion with the pre-existing RESET variants: the DISCARD
commands are not actually similar to RESET. Patch from Marko
Kreen, with some minor editorialization.
2007-04-26 16:13:15 +00:00
Tom Lane
afcf09dd90 Some further performance tweaks for planning large inheritance trees that
are mostly excluded by constraints: do the CE test a bit earlier to save
some adjust_appendrel_attrs() work on excluded children, and arrange to
use array indexing rather than rt_fetch() to fetch RTEs in the main body
of the planner.  The latter is something I'd wanted to do for awhile anyway,
but seeing list_nth_cell() as 35% of the runtime gets one's attention.
2007-04-21 21:01:45 +00:00
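
The cost difference being bought here is the usual list-versus-array one; a
standalone sketch with a toy cell type (not the real pg_list.h):

    #include <stdio.h>

    typedef struct ToyCell
    {
        int             value;
        struct ToyCell *next;
    } ToyCell;

    /* O(n) walk on every fetch, the list_nth()/rt_fetch() pattern. */
    static int list_nth_value(ToyCell *head, int n)
    {
        while (n-- > 0)
            head = head->next;
        return head->value;
    }

    int main(void)
    {
        enum { LEN = 5 };
        ToyCell cells[LEN];
        int     array[LEN];

        for (int i = 0; i < LEN; i++)
        {
            cells[i].value = i * 10;
            cells[i].next = (i + 1 < LEN) ? &cells[i + 1] : NULL;
            array[i] = i * 10;      /* built once; each lookup is O(1) */
        }

        printf("list: %d  array: %d\n", list_nth_value(&cells[0], 3), array[3]);
        return 0;
    }
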
Peter Eisentraut
b7edb568bd Make configuration parameters fall back to their default values when they
are removed from the configuration file.

Joachim Wieland
2007-04-21 20:02:41 +00:00
Tom Lane
9d37c038fc Repair PANIC condition in hash indexes when a previous index extension attempt
failed (due to lock conflicts or out-of-space).  We might have already
extended the index's filesystem EOF before failing, causing the EOF to be
beyond what the metapage says is the last used page.  Hence the invariant
maintained by the code needs to be "EOF is at or beyond last used page",
not "EOF is exactly the last used page".  Problem was created by my patch
of 2006-11-19 that attempted to repair bug #2737.  Since that was
back-patched to 7.4, this needs to be as well.  Per report and test case
from Vlastimil Krejcir.
2007-04-19 20:24:04 +00:00
Alvaro Herrera
ef23a77441 Enable configurable log of autovacuum actions. Initial patch from Simon
Riggs, additional code and docs by me.  Per discussion.
2007-04-18 16:44:18 +00:00
Magnus Hagander
de9effb55f Enable IPV6 for all MSVC builds, including the VC6 libpq-only build.
Per request from Hiroshi Saito.
2007-04-16 18:39:19 +00:00
Alvaro Herrera
e2a186b03c Add a multi-worker capability to autovacuum. This allows multiple worker
processes to be running simultaneously.  Also, now autovacuum processes do not
count towards the max_connections limit; they are counted separately from
regular processes, and are limited by the new GUC variable
autovacuum_max_workers.

The launcher now has intelligence to launch workers on each database every
autovacuum_naptime seconds, limited only by the number of worker slots
available.

Also, the global worker I/O utilization is limited by the vacuum cost-based
delay feature.  Workers are "balanced" so that the total I/O consumption does
not exceed the established limit.  This part of the patch was contributed by
ITAGAKI Takahiro.

Per discussion.
2007-04-16 18:30:04 +00:00
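
The balancing idea in a trivial standalone sketch (the 200-unit limit is just
an illustrative value): the configured cost limit acts as a global I/O budget
divided among however many workers are currently running.

    #include <stdio.h>

    int main(void)
    {
        int vacuum_cost_limit = 200;    /* illustrative value */

        for (int nworkers = 1; nworkers <= 4; nworkers++)
            printf("%d worker(s): about %d cost units each per cycle\n",
                   nworkers, vacuum_cost_limit / nworkers);
        return 0;
    }
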
Tom Lane
42dc4b66e6 Make plancache store cursor options so it can pass them to planner during
a replan.  I had originally thought this was not necessary, but the new
SPI facilities create a path whereby queries planned with non-default
options can get into the cache, so it is necessary.
2007-04-16 18:21:07 +00:00
Tom Lane
f01b196597 Support scrollable cursors (ie, 'direction' clause in FETCH) in plpgsql.
Pavel Stehule, reworked a bit by Tom.
2007-04-16 17:21:24 +00:00
Tom Lane
66888f7424 Expose more cursor-related functionality in SPI: specifically, allow
access to the planner's cursor-related planning options, and provide new
FETCH/MOVE routines that allow access to the full power of those commands.
Small refactoring of planner(), pg_plan_query(), and pg_plan_queries()
APIs to make it convenient to pass the planning options down from SPI.

This is the core-code portion of Pavel Stehule's patch for scrollable
cursor support in plpgsql; I'll review and apply the plpgsql changes
separately.
2007-04-16 01:14:58 +00:00
Tom Lane
fa92d21a48 Avoid running build_index_pathkeys() in situations where there cannot
possibly be any useful pathkeys --- to wit, queries with neither any
join clauses nor any ORDER BY request.  It's nearly free to check for
this case and it saves a useful fraction of the planning time for simple
queries.
2007-04-15 20:09:28 +00:00