fixed incorrect initial setting of StartUpID. The logic in XLogWrite()
expects that Write->curridx is advanced to the next page as soon as
LogwrtResult points to the end of the current page, but StartupXLOG()
failed to make that happen when the old WAL ended exactly on a page
boundary. Per trouble report from Hannu Krosing.
for the sign of timezone offsets, ie, positive is east from UTC. These
were previously out of step with other operations that accept or show
timezones, such as I/O of timestamptz values.
query node, since that won't work unless the planner is upgraded.
Someday we should try to support at least some cases of this, but for
now just plug the hole in the dike. Per discussion with Dmitry Tkach.
shared_buffers and max_connections values to use before we run the
bootstrap process. Without this, initdb would fail on platforms where
the hardwired default values are too large. (We could get around that
by making the hardwired defaults tiny, perhaps, but why slow down
bootstrap by starving it for buffers...)
and 100 respectively, if the platform will allow it. initdb selects
values that are not too large to allow the postmaster to start, and
places these values in the installed postgresql.conf file. This allows
us to continue to start up out-of-the-box on platforms with small SHMMAX,
while having somewhat-realistic default settings on platforms with
reasonable SHMMAX. Per recent pghackers discussion.
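For illustration, on a platform with a generous SHMMAX the installed
postgresql.conf might then end up with lines like (values illustrative):
shared_buffers = 1000
max_connections = 100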
without needing a running backend. Reorder postgresql.conf.sample
to match new layout of runtime.sgml. This commit re-adds work lost
in Wednesday's crash.
instead of the former kluge whereby gram.y emitted already-transformed
expressions. This is needed so that Params appearing in these clauses
actually work correctly. I suppose some might claim that the side effect
of 'SELECT ... LIMIT 2+2' working is a new feature, but I say this is
a bug fix.
It also works to create a non-polymorphic aggregate from polymorphic
functions, should you want to do that. Regression test added, docs still
lacking. By Joe Conway, with some kibitzing from Tom Lane.
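For illustration, a polymorphic aggregate built from polymorphic functions
might be declared like this (assuming the array_append support function from
the related array work):
create aggregate accum_demo (
    basetype = anyelement,
    sfunc = array_append,
    stype = anyarray,
    initcond = '{}'
);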
ANYELEMENT. The effect is to postpone typechecking of the function
body until runtime. Documentation is still lacking.
Original patch by Joe Conway, modified to postpone type checking
by Tom Lane.
node emits only those vars that are actually needed above it in the
plan tree. (There were comments in the code suggesting that this was
done at some point in the dim past, but for a long time we have just
made join nodes emit everything that either input emitted.) Aside from
being marginally more efficient, this fixes the problem noted by Peter
Eisentraut where a join above an IN-implemented-as-join might fail,
because the subplan targetlist constructed in the latter case didn't
meet the expectation of including everything.
Along the way, fix some places that were O(N^2) in the targetlist
length. This is not all the trouble spots for wide queries by any
means, but it's a step forward.
'scalar op ALL (array)', where the operator is applied between the
lefthand scalar and each element of the array. The operator must
yield boolean; the result of the construct is the OR or AND of the
per-element results, respectively.
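For illustration (using the ARRAY[] constructor syntax):
select 10 = any (array[1, 5, 10]);    -- true: 10 matches some element
select 10 = all (array[10, 10, 10]);  -- true: 10 matches every element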
Original coding by Joe Conway, after an idea of Peter's. Rewritten
by Tom to keep the implementation strictly separate from subqueries.
comparison functions), replacing the highly bogus bitwise array_eq. Create
a btree index opclass for ANYARRAY --- it is now possible to create indexes
on array columns.
Arrange to cache the results of catalog lookups across multiple array
operations, instead of repeating the lookups on every call.
Add string_to_array and array_to_string functions.
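For illustration:
select string_to_array('a,b,c', ',');         -- {a,b,c}
select array_to_string(array[1, 2, 3], '|');  -- 1|2|3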
Remove singleton_array, array_accum, array_assign, and array_subscript
functions, since these were for proof-of-concept and not intended to become
supported functions.
Minor adjustments to behavior in some corner cases with empty or
zero-dimensional arrays.
Joe Conway (with some editorializing by Tom Lane).
HH:MM:SS.SSS... when there is a nonzero part-of-a-day field in an
interval value. The seconds part used to be suppressed if zero,
but there's no equivalent behavior for timestamp, and since we're
modeling this format on timestamp it's probably wrong. Per complaint
and patch from Larry Rosenman.
This is no longer necessary or appropriate since we don't use zero typeid
as a wildcard anymore, and it fixes a nasty performance problem with
functions with many parameters. Per recent example from Reuven Lerner.
The attached fixes select_common_type() to support the below case:
create table t1( c1 int);
create domain dom_c1 int;
create table t2(c1 dom_c1);
select * from t1 join t2 using( c1 );
I didn't see a need for maintaining the domain as the preferred type. A
simple getBaseType() call on all elements of the list seems to be
enough.
--
Rod Taylor <rbt@rbt.ca>
- LIKE <subtable> [ INCLUDING DEFAULTS | EXCLUDING DEFAULTS ]
- Quick cleanup of analyze.c function prototypes.
- New non-reserved keywords (INCLUDING, EXCLUDING, DEFAULTS), SQL 200X
Opted not to extend for check constraints at this time.
Per the definition, which covers only user-defined columns, OIDs are NOT
inherited.
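For illustration (hypothetical tables):
create table parent (id int default 0, note text default 'none');
create table clone (like parent including defaults);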
Doc and Source patches attached.
--
Rod Taylor <rbt@rbt.ca>
> http://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/include/libpq/pqcomm.h.diff?r1=1.85&r2=1.86
>
> modified SockAddr, but no corresponding change was made here
> (fe-auth.c:612):
>
> case AUTH_REQ_KRB5:
> #ifdef KRB5
> if (pg_krb5_sendauth(PQerrormsg, conn->sock, &conn->laddr.in,
> &conn->raddr.in,
> hostname) != STATUS_OK)
>
> It's not obvious to me what the change ought to be though.
This patch should hopefully fix both kerberos 4 and 5.
Kurt Roeckx
>> actually having updated the tuple, [...] can we simply
>> set the HEAP_XMAX_INVALID hint bit of the tuple?
>
>AFAICS this is a reasonable thing to do.
Thanks for the confirmation. Here's a patch which also contains some
more noncritical changes to tqual.c:
. make code more readable by introducing local variables for xvac
. no longer two separate branches for aborted and crashed.
The actions were the same in all cases.
Manfred Koizar
restructures the deferred trigger queue. The fundamental change is to
put all the static variables to hold the deferred triggers in a single
structure.
Alvaro Herrera
Regression tests for IPv6 operations added.
Documentation updated to document IPv6 bits.
Stop treating IPv4 as an "unsigned int" and IPv6 as an array of
characters. Instead, always use the array of characters so we
can have one function fits all. This makes bitncmp(), addressOK(),
and several other functions "just work" on both address families.
add family() function which returns integer 4 or 6 for IPv4 or
IPv6. (See examples below) Note that to add this new function
you will need to dump/initdb/reload or find the correct magic
to add the function to the postgresql function catalogs.
IPv4 addresses always sort before IPv6.
On disk we use AF_INET for IPv4, and AF_INET+1 for IPv6 addresses.
This prevents the need for a dump and reload, but lets IPv6 parsing
work on machines without AF_INET6.
To select all IPv4 addresses from a table:
select * from foo where family(addr) = 4 ...
Order by and other bits should all work.
Michael Graff
specific hash functions used by hash indexes, rather than the old
not-datatype-aware ComputeHashFunc routine. This makes it safe to do
hash joining on several datatypes that previously couldn't use hashing.
The sets of datatypes that are hash indexable and hash joinable are now
exactly the same, whereas before each had some that weren't in the other.
character in identifiers. The first change eliminates the current need
to put spaces around parameter references, as in "x<=$2". The second
change improves compatibility with Oracle and some other RDBMSes. This
was discussed and agreed to back in January, but did not get done.
silently resolving them to type TEXT. This is comparable to what we
do when faced with UNKNOWN in CASE, UNION, and other contexts. It gets
rid of this and related annoyances:
select distinct f1, '' from int4_tbl;
ERROR: Unable to identify an ordering operator '<' for type unknown
This was discussed many moons ago, but no one got round to fixing it.
some cases of redundant clauses that were formerly not caught. We have
to special-case this because the clauses involved never get attached to
the same join restrictlist and so the existing logic does not notice
that they are redundant.
both clauses specify the same targets, rather than always using the
default ordering operator. This allows 'GROUP BY foo ORDER BY foo DESC'
to be done with only one sort step.
---------------------------------------------------------------------------
here is a patch that allows CIDR netmasks in pg_hba.conf. It allows two
address/mask forms:
. address/maskbits, or
. address netmask (as now)
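For illustration, the address part of a pg_hba.conf line can now be written
either way (addresses hypothetical):
192.168.12.0/24                (CIDR maskbits form)
192.168.12.0 255.255.255.0     (separate netmask, as before)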
If the patch is accepted I will submit a documentation patch to cover
it.
This is submitted by agreement with Kurt Roeckx, who has worked on a
patch that covers this and other IPv6 issues.
Andrew Dunstan
free'd for every transaction or statement, respectively. This patch
puts these data structures into static memory, thus saving a few CPU
cycles and two malloc calls per transaction or (in isolation level
READ COMMITTED) per query.
Manfred Koizar
least-recently-used strategy from clog.c into slru.c. It doesn't
change any visible behaviour and passes all regression tests plus a
TruncateCLOG test done manually.
Apart from refactoring I made a little change to SlruRecentlyUsed,
formerly ClogRecentlyUsed: It now skips incrementing lru_counts, if
slotno is already the LRU slot, thus saving a few CPU cycles. To make
this work, lru_counts are initialised to 1 in SimpleLruInit.
SimpleLru will be used by pg_subtrans (part of the nested transactions
project), so the main purpose of this patch is to avoid future code
duplication.
Manfred Koizar
some reading on the subject.
1) PostgreSQL uses ephemeral keying, for its connections (good thing)
2) PostgreSQL doesn't set the cipher list that it allows (bad thing,
fixed)
3) PostgreSQL's renegotiation code wasn't text book correct (could be
bad, fixed)
4) The rate of renegotiating was insanely low (as Tom pointed out, set
to a more reasonable level)
I haven't checked around much to see if there are any other SSL bits
that need some review, but I'm doing some OpenSSL work right now
and I'll send patches for improvements along the way (if I find them).
At the very least, the changes in this patch will make security folks
happier for sure. The constant renegotiation of sessions was likely a
boon to systems that had bad entropy gathering means (read: Slowaris
/dev/rand|/dev/urand != ANDIrand). The new limit for renegotiations
is 512MB which should be much more reasonable.
Sean Chittenden
I'd placed the check for newly created matching PK rows (for ON UPDATE NO
ACTION) earlier than it needed to be, so it ran even when the key values
hadn't changed. This patch moves it to after the NULL checks on the old row
and the value comparison, since the SELECT is probably more expensive.
Stephan Szabo
protocol 3, then falls back to 2 if postmaster rejects the startup packet
with an old-format error message. A side benefit of the rewrite is that
SSL-encrypted connections can now be made without blocking. (I think,
anyway, but do not have a good way to test.)
catalog lookups when not in a transaction. This prevents bizarre
failures if someone tries to set a value for session_authorization in
postgresql.conf. Per report from Fernando Nasser.
extensions to support our historical behavior. An aggregate belongs
to the closest query level of any of the variables in its argument,
or the current query level if there are no variables (e.g., COUNT(*)).
The implementation involves adding an agglevelsup field to Aggref,
and treating outer aggregates like outer variables at planning time.
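For illustration (hypothetical table t with column x), an aggregate whose
argument uses only outer-level variables now belongs to that outer query:
select (select max(t.x)) from t;   -- max(t.x) aggregates over the outer query's rows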
when the plan is ReScanned, we don't have to rebuild the hash table
if there is no parameter change for its child node. This idea has
been used for a long time in Sort and Material nodes, but was not in
the hash code till now.
yy_fatal_error() call results in elog(ERROR) not exit(). This was
already fixed in the main lexer and plpgsql, but extend same technique
to all the other dot-l files. Also, on review of the possible calls
to yy_fatal_error(), it seems safe to use elog(ERROR) not elog(FATAL).
leave it for the kernel to do after the process dies. This allows clients
to wait for the backend to exit if they wish (after sending X message,
wait till EOF is detected on the socket).
introducing new 'FastList' list-construction subroutines to use in
hot spots. This avoids the O(N^2) behavior of repeated lappend's
by keeping a tail pointer, while not changing behavior by reversing
list order as the lcons() method would do.
of order; the 'server log' output is actually client output in these
scenarios and we ought to treat elevels the same way as in the client
case. This allows initdb to not send backend stderr to /dev/null anymore,
which makes it much more likely that people will notice problems during
initdb.
of an index can now be a computed expression instead of a simple variable.
Restrictions on expressions are the same as for predicates (only immutable
functions, no sub-selects). This fixes problems recently introduced with
inlining SQL functions, because the inlining transformation is applied to
both expression trees so the planner can still match them up. Along the
way, improve efficiency of handling index predicates (both predicates and
index expressions are now cached by the relcache) and fix 7.3 oversight
that didn't record dependencies of predicate expressions.
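For illustration (hypothetical table), an index column can now be an
arbitrary expression, written with an extra set of parentheses:
create index people_full_name_idx on people ((first_name || ' ' || last_name));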
advertised in RowDescription message. Depending on the physical tuple's
column count is not really correct, since according to heap_getattr()
conventions the tuple may be short some columns, which will automatically
get read as nulls. Problem has been latent since forever, but was only
exposed by recent change to skip a projection step in SELECT * FROM...
blanks, in hopes of reducing the surprise factor for newbies. Remove
redundant operators for VARCHAR (it depends wholly on TEXT operations now).
Clean up resolution of ambiguous operators/functions to avoid surprising
choices for domains: domains are treated as equivalent to their base types
and binary-coercibility is no longer considered a preference item when
choosing among multiple operators/functions. IsBinaryCoercible now correctly
reflects the notion that you need *only* relabel the type to get from type
A to type B: that is, a domain is binary-coercible to its base type, but
not vice versa. Various marginal cleanup, including merging the essentially
duplicate resolution code in parse_func.c and parse_oper.c. Improve opr_sanity
regression test to understand about binary compatibility (using pg_cast),
and fix a couple of small errors in the catalogs revealed thereby.
Restructure "special operator" handling to fetch operators via index opclasses
rather than hardwiring assumptions about names (cleans up the pattern_ops
stuff a little).
single-byte encodings, and a direct C implementation of the single-argument
forms (where spaces are always what gets trimmed). This is in preparation
for using rtrim1() as the bpchar-to-text cast operator, but is a useful
performance improvement even if we decide not to do that.
example from Rao Kumar. This is a very narrow corner case, requiring
a minimum of three closely-spaced database crashes and an unlucky
positioning of the second recovery's checkpoint record before you'd notice
any problem. But the consequences are dire enough that it's a must-fix.
Ross Reedstrom, a couple months back) and to detect timezones that are
using leap-second timekeeping. The unknown-zone-name test is pretty
heuristic and ugly, but it seems better than the old behavior of just
switching to GMT given a bad name. Also make DecodePosixTimezone() a
tad more robust.
Win32 port is now called 'win32' rather than 'win'
add -lwsock32 on Win32
make gethostname() be only used when kerberos4 is enabled
use /port/getopt.c
new /port/opendir.c routines
disable GUC unix_socket_group on Win32
convert some keywords.c symbols to KEYWORD_P to prevent conflict
create new FCNTL_NONBLOCK macro to turn off socket blocking
create new /include/port.h file that has /port prototypes, move
out of c.h
new /include/port/win32_include dir to hold missing include files
work around ERROR being defined in Win32 includes
only remnant of this failed experiment is that the server will take
SET AUTOCOMMIT TO ON. Still TODO: provide some client-side autocommit
logic in libpq.
the type OID and typmod of the underlying base type. Per discussions
a few weeks ago with Andreas Pflug and others. Note that this behavioral
change affects both old- and new-protocol clients.
This makes no difference for existing uses, but allows SelectSortFunction()
and pred_test_simple_clause() to use indexscans instead of seqscans to
locate entries for a particular operator in pg_amop. Better yet, they can
use the SearchSysCacheList() API to cache the search results.
dropped. The simplest fix for INSERT/UPDATE cases turns out to be for
preptlist.c to insert NULLs of a known-good type (I used INT4) rather
than making them match the deleted column's type. Since the representation
of NULL is actually datatype-independent, this should work fine.
I also re-reverted the patch to disable the use_physical_tlist optimization
in the presence of dropped columns. It still doesn't look worth the
trouble to be smarter, if there are no other bugs to fix.
Added a regression test to catch future problems in this area.
where the table contains dropped columns. If the columns are dropped,
then their types may be gone as well, which causes ExecTypeFromTL() to
fail if the dropped columns appear in a plan node's tlist. This could
be worked around but I don't think the optimization is valuable enough
to be worth the trouble.
detected during buffer dump to be labeled with the buffer location.
For example, if a page LSN is clobbered, we now produce something like
ERROR: XLogFlush: request 2C000000/8468EC8 is not satisfied --- flushed only
to 0/8468EF0
CONTEXT: writing block 0 of relation 428946/566240
whereas before there was no convenient way to find out which page had
been trashed.
dead xlog segments are not considered part of a critical section. It is
not necessary to force a database-wide panic if we get a failure in these
operations. Per recent trouble reports.
handle multiple 'formats' for data I/O. Restructure CommandDest and
DestReceiver stuff one more time (it's finally starting to look a bit
clean though). Code now matches latest 3.0 protocol document as far
as message formats go --- but there is no support for binary I/O yet.
of Describe on a prepared statement. This was in the original 3.0
protocol proposal, but I took it out for reasons that seemed good at
the time. Put it back per yesterday's pghackers discussion.
Describe would claim that no tuples will be returned. Only affects
SELECTs added to non-SELECT base queries by rewrite rules. If you
want to see the output of such a select, you gotta use 'simple Query'
protocol.
DestReceiver pointers instead of just CommandDest values. The DestReceiver
is made at the point where the destination is selected, rather than
deep inside the executor. This cleans up the original kluge implementation
of tstoreReceiver.c, and makes it easy to support retrieving results
from utility statements inside portals. Thus, you can now do fun things
like Bind and Execute a FETCH or EXPLAIN command, and it'll all work
as expected (e.g., you can Describe the portal, or use Execute's count
parameter to suspend the output partway through). Implementation involves
stuffing the utility command's output into a Tuplestore, which would be
kind of annoying for huge output sets, but should be quite acceptable
for typical uses of utility commands.
the column by table OID and column number, if it's a simple column
reference. Along the way, get rid of reskey/reskeyop fields in Resdoms.
Turns out that representation was not convenient for either the planner
or the executor; we can make the planner deliver exactly what the
executor wants with no more effort.
initdb forced due to change in stored rule representation.
which does the same thing. Perhaps at one time there was a reason to
allow plan nodes to store their result types in different places, but
AFAICT that's been unnecessary for a good while.
Per recent discussion on pgsql-general, this is appropriate for spec
compliance, and has the nice side-effect of easing porting from old
pg_dump files that exhibit the 59.999=>60.000 roundoff problem.
implementation limits, do not issue an ERROR; instead issue a NOTICE and use
the max supported value. Per pgsql-general discussion of 28-Apr, this is
needed to allow easy porting from pre-7.3 releases where the limits were
higher.
Unrelated change in same area: accept GLOBAL TEMP/TEMPORARY as a synonym
for TEMPORARY, as per pgsql-hackers discussion of 15-Apr. We previously
rejected it, but that was based on a misreading of the spec --- SQL92's
GLOBAL temp tables are really closer to what we have than their LOCAL ones.
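For illustration:
create global temporary table gtemp_demo (n int);  -- now accepted, same as TEMPORARY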
Both plannable queries and utility commands are now always executed
within Portals, which have been revamped so that they can handle the
load (they used to be good only for single SELECT queries). Restructure
code to push command-completion-tag selection logic out of postgres.c,
so that it won't have to be duplicated between simple and extended queries.
initdb forced due to addition of a field to Query nodes.
Without this fix, CVS tip dumps core when running the regression tests
with geqo_threshold = 2. I would think that a similar patch might be
needed in 7.3, but cannot duplicate the failure in that branch --- so
for now, leave well enough alone.
of extended query features in new FE/BE protocol. TransactionCommandContext
is gone (PortalContext replaces it for some purposes), and QueryContext
has taken on a new meaning (MessageContext plays its old role).
that the types of untyped string-literal constants are deduced (ie,
when coerce_type is applied to 'em, that's what the type must be).
Remove the ancient hack of storing the input Param-types array as a
global variable, and put the info into ParseState instead. This touches
a lot of files because of adjustment of routine parameter lists, but
it's really not a large patch. Note: PREPARE statement still insists on
exact specification of parameter types, but that could easily be relaxed
now, if we wanted to do so.
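For illustration of the still-required explicit parameter typing (table name
hypothetical):
prepare getemp (int) as select * from emp where empno = $1;
execute getemp(7369);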
I had inadvertently omitted it while rearranging things to support
length-counted incoming messages. Also, change the parser's API back
to accepting a 'char *' query string instead of 'StringInfo', as the
latter wasn't buying us anything except overhead. (I think when I put
it in I had some notion of making the parser API 8-bit-clean, but
seeing that flex depends on null-terminated input, that's not really
ever gonna happen.)
in all cases, leaked TopMemoryContext memory in others. Make the
interaction between SetClientEncoding and InitializeClientEncoding
cleaner and better documented. I suspect these changes should be
back-patched into 7.3, but will wait on Tatsuo's verification.
for tableID/columnID in RowDescription. (The latter isn't really
implemented yet though --- the backend always sends zeroes, and libpq
just throws away the data.)
initial values and runtime changes in selected parameters. This gets
rid of the need for an initial 'select pg_client_encoding()' query in
libpq, bringing us back to one message transmitted in each direction
for a standard connection startup. To allow server version to be sent
using the same GUC mechanism that handles other parameters, invent the
concept of a never-settable GUC parameter: you can 'show server_version'
but it's not settable by any GUC input source. Create 'lc_collate' and
'lc_ctype' never-settable parameters so that people can find out these
settings without need for pg_controldata. (These side ideas were all
discussed some time ago in pgsql-hackers, but not yet implemented.)
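For illustration:
show server_version;   -- readable, but not settable by any GUC input source
show lc_collate;
show lc_ctype;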
into a UNION that has some type coercions applied to the component
queries, so long as the qual itself does not reference any columns that
have such coercions. Per example from Jonathan Bartlett 24-Apr-03.
rewritten and the protocol is changed, but most elog calls are still
elog calls. Also, we need to contemplate mechanisms for controlling
all this functionality --- eg, how much stuff should appear in the
postmaster log? And what API should libpq expose for it?
have length words. COPY OUT reimplemented per new protocol: it doesn't
need \. anymore, thank goodness. COPY BINARY to/from frontend works,
at least as far as the backend is concerned --- libpq's PQgetline API
is not up to snuff, and will have to be replaced with something that is
null-safe. libpq uses message length words for performance improvement
(no cycles wasted rescanning long messages), but not yet for error
recovery.
Output \r\n termination on Win32.
Disallow a literal carriage return as a data value;
backslash-carriage-return and \r are still allowed.
Doc changes already committed.
with variable-width fields. No more truncation of long user names.
Also, libpq can now send its environment-variable-driven SET commands
as part of the startup packet, saving round trips to server.
account for NULLs; in hindsight this is obvious since the code for
the MCV-lists case would reduce to this when there are zero entries
in both lists. Per example from Alec Mitchell.
expressions, ARRAY(sub-SELECT) expressions, some array functions.
Polymorphic functions using ANYARRAY/ANYELEMENT argument and return
types. Some regression tests in place, documentation is lacking.
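For illustration:
select array[1, 2, 3 + 4];                            -- {1,2,7}
select array(select relname from pg_class limit 3);   -- ARRAY(sub-SELECT) form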
Joe Conway, with some kibitzing from Tom Lane.
on UPDATE.
This gets rid of the long-standing annoyance that updating a row
that has foreign keys locks all the referenced rows even if the
foreign key values do not change.
The trick is to do a check identical to NO ACTION after any UPDATE
actually performed in the SET DEFAULT case. Since a SET DEFAULT
operation should have moved referencing rows to a new "home", a following
NO ACTION check can only fail if the column defaults of the referencing
table resulted in the key we actually deleted. Thanks to Stephan.
Jan
find out about it is to read the documentation that tells you how
dangerous it is. Add default_transaction_read_only to documentation;
seems to have been overlooked in patch that added read-only transactions.
Clean up check_guc comparison script, which has been suffering bit rot.
an unneeded memory context and some variables that are not used anymore.
It's pretty trivial and the regression tests pass fine. There's no
change in functionality, only deletion of unused code. I left an empty
function because maybe I'll need it for nested transactions.
Alvaro Herrera
harmless on signed-char machines but would lead to core dump in the
deadlock detection code if char is unsigned. Amazingly, this bug has
been here since 7.1 and yet wasn't reported till now. Thanks to Robert
Bruccoleri for providing the opportunity to track it down.
Possible typo in src/backend/libpq/be-secure.c?
In src/backend/libpq/be-secure.c, secure_write reacts to
SSL_ERROR_WANT_WRITE by calling secure_read instead of calling
secure_write again. Maybe this is a typo?
Sergey N. Yatskevich (syatskevich@n21lab.gosniias.msk.ru)
page when it's read in, per pghackers discussion around 17-Feb. Add a
GUC variable zero_damaged_pages that causes the response to be a WARNING
followed by zeroing the page, rather than the normal ERROR; this is per
Hiroshi's suggestion that there needs to be a way to get at the data
in the rest of the table.
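For illustration (table name hypothetical):
set zero_damaged_pages = on;   -- damaged pages now produce a WARNING and are zeroed
select count(*) from possibly_damaged_table;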
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior. Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to lookup the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code with the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the meantime,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers; see the example after this list.
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
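A minimal sketch of the resulting cursor behavior, assuming WITH HOLD is the
syntax for a cursor whose contents survive the creating transaction (table
name hypothetical):
begin;
declare c no scroll cursor with hold for select * from accounts;
commit;              -- results are materialized into a tuple store
fetch 10 from c;     -- still usable after the transaction ends
close c;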
Neil Conway
NULL key pointer, indicating that the existing scan key should be reused.
This behavior isn't used yet but will be needed for my planned fix to
the keys_are_unique code.
them as arrays of the internal datatype. This requires treating the
stavalues columns as 'anyarray' rather than 'text[]', which is not 100%
kosher but seems to work fine for the purposes we need for pg_statistic.
Perhaps in the future 'anyarray' will be allowed more generally.
refers to a non-DISTINCT output column of a DISTINCT ON subquery, or
if it refers to a function-returning-set, we cannot push it down.
But the old implementation refused to push down *any* quals if the
subquery had any such 'dangerous' outputs. Now we just look at the
output columns actually referenced by each qual expression. More code
than before, but probably no slower since we don't make unnecessary checks.
some of the algorithms for higher functions. I see about a factor of ten
speedup on the 'numeric' regression test, but it's unlikely that that test
is representative of real-world applications.
initdb forced due to change of on-disk representation for NUMERIC.
Add ALTER SEQUENCE to modify min/max/increment/cache/cycle values
Also updated the CREATE SEQUENCE docs to mention NO MINVALUE and NO MAXVALUE.
New Files:
doc/src/sgml/ref/alter_sequence.sgml
src/test/regress/expected/sequence.out
src/test/regress/sql/sequence.sql
ALTER SEQUENCE is NOT transactional. It behaves similarly to setval().
It matches the proposed SQL200N spec, as well as Oracle in most ways --
Oracle lacks RESTART WITH for some strange reason.
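For illustration:
create sequence ordernum_seq;
alter sequence ordernum_seq increment by 5 restart with 1000 cache 10;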
--
Rod Taylor <rbt@rbt.ca>
what is possible using integer-datetime timestamps. It does attempt
to exercise the maximum allowable timestamp range.
Also adds a small error check when converting a timestamp from external
to internal format that prevents out-of-range timestamps from being
entered.
Files patched:
Index: src/backend/utils/adt/timestamp.c
Added range check to prevent out of range timestamps
from being used.
Index: src/test/regress/sql/horology.sql
Index: src/test/regress/expected/horology-no-DST-before-1970.out
Index: src/test/regress/expected/horology-solaris-1947.out
Limited range of timestamps being checked to
Jan 1, 4713 BC to Dec 31, 294276
In creating this patch, I have seen some definite problems with integer
timestamps and how they react when used near their limits. For example,
the following statement gives the correct result:
SELECT timestamp without time zone 'Jan 1, 4713 BC'
+ interval '109203489 days' AS "Dec 31, 294276";
However, this statement which is the logical inverse of the above
gives incorrect results:
SELECT timestamp without time zone '12/31/294276'
- timestamp without time zone 'Jan 1, 4713 BC' AS "109203489 Days";
John Cochran
patch fixes it -- but this patch doesn't contain tests or doc fixes. I
will send them later.
Fixed outputs:
select to_char(x, '9999.999') as x,
to_char(x, 'S9999.999') as s,
to_char(x, 'SG9999.999') as sg,
to_char(x, 'MI9999.999') as mi,
to_char(x, 'PL9999.999') as pl,
to_char(x, 'PLMI9999.999') as plmi,
to_char(x, '9999.999SG') as sg2,
to_char(x, '9999.999PL') as pl2,
to_char(x, '9999.999MI') as mi2 from num;
Karel Zak
> >
> > - Add check in pg_dump to see if the value returned is the max /min
> > values and replace with NO MAXVALUE, NO MINVALUE.
> >
> > - Change START and INCREMENT to use START WITH and INCREMENT BY syntax.
> > This makes it a touch easier to port to other databases with sequences
> > (Oracle). PostgreSQL supports both syntaxes already.
>
> + char bufm[100],
> + bufx[100];
>
> This seems to be an arbitrary size. Why not set it to the actual maximum
> length?
>
> Also:
>
> + snprintf(bufm, 100, INT64_FORMAT, SEQ_MINVALUE);
> + snprintf(bufx, 100, INT64_FORMAT, SEQ_MAXVALUE);
>
> sizeof(bufm), sizeof(bufx) is probably the more
> maintenance-friendly/standard way to do it.
I changed the code to use sizeof, but will wait for a response from
Peter before changing the size. The sequence code consistently uses 100
for this purpose.
Rod Taylor <rbt@rbt.ca>
> weird behavior across fork boundaries; (b) the additional memory space
> that has to be duplicated into child processes will cost something per
> child launch, even if the child never uses it. But these are only
> arguments that it might not *always* be a prudent thing to do, not that
> we shouldn't give the DBA the tool to do it if he wants. So fire away.
Here is a patch for the above, including a documentation update. It
creates a new GUC variable "preload_libraries", that accepts a list in
the form:
preload_libraries = '$libdir/mylib1:initfunc,$libdir/mylib2'
If ":initfunc" is omitted or not found, no initialization function is
executed, but the library is still preloaded. If "$libdir/mylib" isn't
found, the postmaster refuses to start.
In my testing with PL/R, it reduces the first call to a PL/R function
(after connecting) from almost 2 seconds, down to about 8 ms.
Joe Conway
This bug has been latent since 7.0 or maybe even further back, but it
was only exposed when parse_clause.c stopped suppressing duplicate
items (see its rev 1.96 of 18-Aug-02).
division and modulo functions, to avoid problems on OS X (which fails to
trap 0 divide at all) and Windows (which traps it in some bizarre
nonstandard fashion). Standardize on 'division by zero' as the one true
spelling of this error message. Add regression tests as suggested by
Neil Conway.
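For reference, the now-uniform behavior:
select 1/0;
ERROR:  division by zero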
utility statement (DeclareCursorStmt) with a SELECT query dangling from
it, rather than a SELECT query with a few unusual fields in it. Add
code to determine whether a planned query can safely be run backwards.
If DECLARE CURSOR specifies SCROLL, ensure that the plan can be run
backwards by adding a Materialize plan node if it can't. Without SCROLL,
you get an error if you try to fetch backwards from a cursor that can't
handle it. (There is still some discussion about what the exact
behavior should be, but this is necessary infrastructure in any case.)
Along the way, make EXPLAIN DECLARE CURSOR work.
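For illustration (table name hypothetical):
begin;
declare c scroll cursor for select * from some_table;
fetch forward 5 from c;
fetch backward 2 from c;   -- allowed because SCROLL guarantees backward fetches
commit;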
entire contents of the subplan into the tuplestore before we can return
any tuples. Instead, the tuplestore holds what we've already read, and
we fetch additional rows from the subplan as needed. Random access to
the previously-read rows works with the tuplestore, and doesn't affect
the state of the partially-read subplan. This is a step towards fixing
the problems with cursors over complex queries --- we don't want to
stick in Materialize nodes if they'll prevent quick startup for a cursor.
cleaning out temp namespaces. We don't really want the server log to be
cluttered with 'Drop cascades to table foo' every time someone uses a
temp table...
at database shutdown, and then load it again at database startup. This
preserves our hard-won knowledge of free space across restarts (given
an orderly shutdown, that is).
DELETE of an inheritance tree references another inherited relation.
This bug has been latent since 7.1; I'm still not quite sure why 7.1 and
7.2 don't manifest it (at least, they don't crash on a simple test case).
Adjustable threshold is gone in favor of keeping track of total requested
page storage and doling out proportional fractions to each relation
(with a minimum amount per relation, and some quantization of the results
to avoid thrashing with small changes in page counts). Provide special-
case code for indexes so as not to waste space storing useless page
free space counts. Restructure internal data storage to be a flat array
instead of list-of-chunks; this may cost a little more work in data
copying when reorganizing, but allows binary search to be used during
lookup_fsm_page_entry().
the join, per recent discussion on pgsql-sql. Not clear that this will
come up often in real queries, but it's not any more expensive to do it
right, so we may as well do it right.
is assumed to be in local time, not GMT. This improves consistency with
other operations, which all assume local timezone when it matters. Per
bug #897.
recursion in RewriteQuery(); also, detect recursion in fireRIRrules(),
so as to catch self-referential views per example from Ryan VanderBijl.
Minor code restructuring to make it easier to catch recursive case.
end of a btree index. This isn't super-effective, since we won't move
nondeletable pages, but it's better than nothing. Also, improve stats
displayed during VACUUM VERBOSE.
deleting multiple index entries on a single index page. This makes for
a very substantial reduction in the amount of WAL traffic during a
large delete operation.