end of the block:
<<label>>
begin
...
end label;
Similarly for loops. This is per PL/SQL. Update the documentation and
add regression tests. Patch from Pavel Stehule, code review by Neil
Conway.
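For illustration only (function and label names invented), the repeated label might be used like this:
CREATE FUNCTION count_up() RETURNS integer AS $$
<<outer_block>>
DECLARE
    total integer := 0;
BEGIN
    <<counting>>
    FOR i IN 1..3 LOOP
        total := total + i;
    END LOOP counting;       -- loop label repeated after END LOOP
    RETURN total;
END outer_block;             -- block label repeated after END
$$ LANGUAGE plpgsql;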
basic regression tests for GiST to the standard regression tests.
I took the opportunity to add an rtree-equivalent gist opclass for
circles; the contrib version only covered boxes and polygons, but
indexing circles is very handy for distance searches.
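A hedged sketch of how such an index might be used (table and column names are invented; the search circle is arbitrary):
CREATE TABLE sites (id serial PRIMARY KEY, area circle);
CREATE INDEX sites_area_gist ON sites USING gist (area);
-- find areas overlapping a circle of radius 10 around the origin
SELECT id FROM sites WHERE area && circle '((0,0),10)';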
pgtypeslib: (1) we need to check the return value of sub_abs() (2) we
need to check the return value of 4 calls to digitbuf_alloc().
Per Coverity static analysis performed by EnterpriseDB.
non-NULL in this function, so there is no need to check for it (2) we
should check the return value of pgtypes_strdup(). Patch from Eric
Astor at EnterpriseDB, with slight cleanup by myself, per a report
from the Coverity tool.
or zero-filled; therefore zero-filling it via memset() is pointless.
(I think setting `errno' is probably a waste of cycles as well, but I
haven't changed that.)
malloc() succeeded but the second failed, the buffer allocated by the
first malloc() would be leaked. Fix this by allocating both buffers
via a single malloc(), as suggested by Tom.
Per Coverity static analysis performed by EnterpriseDB.
the difference between checkpoints forced due to WAL segment consumption
and checkpoints forced for other reasons (such as CREATE DATABASE). Avoid
generating 'checkpoints are occurring too frequently' messages when the
checkpoint wasn't caused by WAL segment consumption. Per gripe from
Chris K-L.
current time: provide a GetCurrentTimestamp() function that returns
current time in the form of a TimestampTz, instead of separate time_t
and microseconds fields. This is what all the callers really want
anyway, and it eliminates low-level dependencies on AbsoluteTime,
which is a deprecated datatype that will have to disappear eventually.
role memberships; make superuser/createrole distinction do something
useful; fix some locking and CommandCounterIncrement issues; prevent
creation of loops in the membership graph.
syntactic conflicts, both privilege and role GRANT/REVOKE commands have
to use the same production for scanning the list of tokens that might
eventually turn out to be privileges or role names. So, change the
existing GRANT/REVOKE code to expect a list of strings rather than
pre-reduced AclMode values. Fix a couple of other minor issues while at it, such as the
InitializeAcl function name conflicting with a Windows system function.
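To illustrate why one production must cover both cases (names invented, and the roles are assumed to exist already), the token after GRANT could be either a privilege or a role:
GRANT SELECT ON TABLE accounts TO bob;   -- SELECT resolves to a privilege
CREATE ROLE auditor;
GRANT auditor TO bob;                    -- auditor resolves to a role name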
and pg_auth_members. There are still many loose ends to finish in this
patch (no documentation, no regression tests, no pg_dump support, for
instance). But I'm going to commit it now anyway so that Alvaro can
make some progress on shared dependencies. The catalog changes should
be pretty much done.
- full concurrency for insert/update/select/vacuum:
- select and vacuum never lock more than one page simultaneously
- select (gettuple) doesn't hold any lock across its calls
- insert never locks more than two pages simultaneously:
- during the search for the leaf to insert into, it locks only one
page at a time
- while walking upward to the root it locks only the parent (possibly
a non-direct parent) and the child; one of them is X-locked, the
other may be S- or X-locked
- 'vacuum full' locks the index
- improve gistgetmulti
- simplify XLOG records
Fix bug in index_beginscan_internal: LockRelation may clean
rd_aminfo structure, so move GET_REL_PROCEDURE after LockRelation
name matches the name of any parent-table constraint, without looking
at the constraint text. This is a not-very-bulletproof workaround for
the problem exhibited by Berend Tober last month. We really ought to
record constraint inheritance status in pg_constraint, but it's looking
like that may not get done for 8.1 --- and even if it does, we will
need this kluge for dumping from older servers.
with a table that has a small predicted size. Avoids wasting several
hundred K on the timezone hash table, which is likely to have only one
or a few entries, but the entries use up 10Kb apiece ...
with main, avoid using a SQL-defined SQLSTATE for what is most definitely
not a SQL-compatible error condition, fix documentation omissions,
adhere to message style guidelines, don't use two GUC_REPORT variables
when one is sufficient. Nothing done about pg_dump issues.
literally.
Add GUC variables:
"escape_string_warning" - warn about backslashes in non-E strings
"escape_string_syntax" - supports E'' syntax?
"standard_compliant_strings" - treats backslashes literally in ''
Update code to use E'' when escapes are used.
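Roughly, the intended behavior looks like this (only escape_string_warning is shown; the warning text is paraphrased):
SET escape_string_warning = on;
SELECT 'a\nb';    -- backslash in a non-E string: may draw a warning
SELECT E'a\nb';   -- explicit escape-string syntax: no warning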
should fix the recent reports of "index is not a btree" failures,
as well as preventing a more obscure race condition involving changes
to a template database just after copying it with CREATE DATABASE.
to the existing X-direction tests. An rtree class now includes 4 actual
2-D tests, 4 1-D X-direction tests, and 4 1-D Y-direction tests.
This involved adding four new Y-direction test operators for each of
box and polygon; I followed the PostGIS project's lead as to the names
of these operators.
NON BACKWARDS COMPATIBLE CHANGE: the poly_overleft (&<) and poly_overright
(&>) operators now have semantics comparable to box_overleft and box_overright.
This is necessary to make r-tree indexes work correctly on polygons.
Also, I changed circle_left and circle_right to agree with box_left and
box_right --- formerly they allowed the boundaries to touch. This isn't
actually essential given the lack of any r-tree opclass for circles, but
it seems best to sync all the definitions while we are at it.
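Under the new semantics, &< on polygons means "does not extend to the right of", matching the box operators; for example (values invented):
SELECT polygon '((0,0),(1,1),(1,0))' &< polygon '((0,0),(3,3),(3,0))';  -- true
SELECT box '((0,0),(1,1))' &< box '((0,0),(3,3))';                      -- true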
CURRENT_TIME, and LOCALTIME: now they just produce "timestamptz" not
"timestamptz(6)", etc. This makes the behavior more consistent with our
choice to not assign a specific default precision to column datatypes.
It should also save a few cycles at runtime due to not having to invoke
the round-to-given-precision functions.
I also took the opportunity to translate CURRENT_TIMESTAMP into "now()"
instead of an invocation of the timestamptz input converter --- this should
save a few cycles too.
polygon operators (<<, &<, >>, &>). Per ideas originally put forward
by andrew@supernews and later rediscovered by moi. This patch just
fixes the existing opclasses, and does not add any new behavior as I
proposed earlier; that can be sorted out later. In principle this
could be back-patched, since it changes only search behavior and not
system catalog entries nor rtree index contents. I'm not currently
planning to do that, though, since I think it could use more testing.
in the database. The old behavior (reindex system catalogs only) is now
available as REINDEX SYSTEM. I did not add the complementary REINDEX USER
case since there did not seem to be consensus for this, but it would be
trivial to add later. Per recent discussions.
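Usage sketch (database name invented; REINDEX DATABASE must name the current database):
REINDEX DATABASE mydb;   -- now reindexes every index in the database
REINDEX SYSTEM mydb;     -- old behavior: system catalogs only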
argument list contains parameter symbols ($n) declared as type VOID,
discard these arguments. This allows the driver to avoid renumbering
mixed IN and OUT argument placeholders (the JDBC syntax involves writing
? for both IN and OUT parameters, but on the server side we don't think
that OUT parameters are arguments). This doesn't break any currently-
useful cases since VOID is not used as an input argument type.
we need to be careful to reset rc to PLPGSQL_RC_OK, depending on how
the loop's logic is structured. If we continue a loop but it then
exits without executing the loop's body again, we want to return
PLPGSQL_RC_OK to our caller. Enhance the regression tests to catch
this problem. Per report from Michael Fuhr.
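A hedged sketch of the problematic pattern (names invented): a CONTINUE on the final iteration, after which the loop exits without running its body again, must still let the function return normally:
CREATE FUNCTION continue_on_last() RETURNS integer AS $$
DECLARE
    total integer := 0;
BEGIN
    FOR i IN 1..3 LOOP
        CONTINUE WHEN i = 3;   -- loop exits right after this CONTINUE
        total := total + i;
    END LOOP;
    RETURN total;              -- expect 3, and a normal return
END;
$$ LANGUAGE plpgsql;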
(1) The code doesn't initialize `sum', so the initial "does the checksum
match?" test is wrong.
(2) The loop that is intended to check for a "null block" just checks
the first byte of the tar block 512 times, rather than each of the
512 bytes one time (!), which I'm guessing was the intent.
It was only through sheer luck that this worked in the first place.
Per Coverity static analysis performed by EnterpriseDB.
is redundant after a check has already been made for "num < 0". The "set"
variable can also be removed, as it is now no longer used. Per checking
with Karel, this is the right fix.
Per Coverity static analysis performed by EnterpriseDB.
using the recently added lo_create() function. The restore logic in
pg_restore is greatly simplified as well, since there's no need anymore
to try to adjust database references to match a new set of blob OIDs.
unlike template0 and template1 does not have any special status in
terms of backend functionality. However, all external utilities such
as createuser and createdb now connect to "postgres" instead of
template1, and the documentation is changed to encourage people to use
"postgres" instead of template1 as a play area. This should fix some
longstanding gotchas involving unexpected propagation of database
objects by createdb (when you used template1 without understanding
the implications), as well as ameliorating the problem that CREATE
DATABASE is unhappy if anyone else is connected to template1.
Patch by Dave Page, minor editing by Tom Lane. All per recent
pghackers discussions.
malformed ident map file. This was introduced by the linked list
rewrite in 8.0 -- mea maxima culpa.
Per Coverity static analysis performed by EnterpriseDB.
only used in one branch of an if statement, so we can move its
declaration to that block. This also avoids an unnecessary syscache
lookup.
Per Coverity static analysis performed by EnterpriseDB.
NULL (e.g. due to the preceding strlen()). Therefore we needn't recheck
this before initializing 'e_text'.
Per Coverity static analysis performed by EnterpriseDB.
(due to the preceding strlen(), for example), so we needn't recheck this
before invoking pg_mbcliplen().
Per Coverity static analysis performed by EnterpriseDB.
It never leaked memory before PG 8.0, so none of the callers are
expecting this. Cleanest fix seems to be to make it allocate the needed
memory in estate->eval_econtext, where it will be cleaned up by
the next exec_eval_cleanup. Per report from Bill Rugolsky.
*fail*, to test that plpython didn't allow untrusted operations.
When we changed plpython to plpythonu because python didn't actually have
a secure sandbox mode, someone (probably me :-() misinterpreted the tests
as checking whether Python's file I/O works. Which is a stupid thing for
us to be testing. Remove it so we don't clutter the filesystem with
random temporary files.
copying/converting the new value, which meant that it failed badly on
"var := var" if var is of pass-by-reference type. Fix this and a similar
hazard in exec_move_row(); not sure that the latter can manifest before
8.0, but patch it all the way back anyway. Per report from Dave Chapeskie.
(a/k/a SELECT INTO). Instead, flush and fsync the whole relation before
committing. We do still need the WAL log when PITR is active, however.
Simon Riggs and Tom Lane.
2. improve vacuum for gist
- use FSM
- full vacuum:
- reforms the parent tuple if needed
(tuples were deleted on a child page, or the parent tuple remains
invalid after crash recovery)
- truncate index file if possible
3. fix bugs and mistakes
scankeys arrays that it needs can never have more than INDEX_MAX_KEYS
entries, so it's reasonable to just allocate them as fixed-size local
arrays, and save the cost of palloc/pfree. Not a huge savings, but
a cycle saved is a cycle earned ...
includes error checking and an appropriate ereport(ERROR) message.
This gets rid of rather tedious and error-prone manipulation of errno,
as well as a Windows-specific bug workaround, at more than a dozen
call sites. After an idea in a recent patch by Heikki Linnakangas.
given reasonably short lifespans for prepared transactions, this should
mean that only a small minority of state files ever need to be fsynced
at all. Per discussion with Heikki Linnakangas.
not memcpy() to copy the offered key into the hash table during HASH_ENTER.
This avoids possible core dump if the passed key is located very near the
end of memory. Per report from Stefan Kaltenbrunner.
old suggestion by Oliver Jowett. Also, add a transaction column to the
pg_locks view to show the xid of each transaction holding or awaiting
locks; this allows prepared transactions to be properly associated with
the locks they own. There was already a column named 'transaction',
and I chose to rename it to 'transactionid' --- since this column is
new in the current devel cycle there should be no backwards compatibility
issue to worry about.
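For example, to see which transactions hold or await locks (column list abbreviated):
SELECT locktype, transactionid, pid, mode, granted
FROM pg_locks
WHERE transactionid IS NOT NULL;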
work if either of the join relations is empty. The logic is:
(1) if the inner relation's startup cost is less than the outer
relation's startup cost and this is not an outer join, read
a single tuple from the inner relation via ExecHash()
- if NULL, we're done
(2) read a single tuple from the outer relation
- if NULL, we're done
(3) build the hash table on the inner relation
- if hash table is empty and this is not an outer join,
we're done
(4) otherwise, do hash join as usual
The implementation uses the new MultiExecProcNode API, per a
suggestion from Tom: invoking ExecHash() now produces the first
tuple from the Hash node's child node, whereas MultiExecHash()
builds the hash table.
I had to put in a bit of a kludge to get the row count returned
for EXPLAIN ANALYZE to be correct: since ExecHash() is invoked to
return a tuple, and then MultiExecHash() is invoked, we would
return one too many tuples to EXPLAIN ANALYZE. I hacked around
this by just manually detecting this situation and subtracting 1
from the EXPLAIN ANALYZE row count.
> against rc1. It simply checks with GetDatabaseEncoding() if the current
> database is in UTF-8, and if so, sets the UTF-8 flag on the arguments
> that are passed to perl. This means that it isn't necessary to
> utf8::upgrade() every string, as perl has no way of knowing offhand
> that a string is UTF-8 -- but postgres does, because the database
> encoding is specified, so it makes sense to turn the flag on. You
> should also be able to properly manipulate UTF-8 strings now from
> plperl as opposed to plperlu, because otherwise you'd have to use
> encoding 'utf8' which was not allowed. It could also eliminate some
> unexpected bugs if you assume that perl knows the string is unicode.
> It is enabled only for perl 5.6 and higher, so earlier versions will not
> be affected.
>
> I have been assured by crab that the patch is quite harmless and will
> not break anything. It would be great to see it in 8 final! :-)
David Kamholz
"AT TIME ZONE", and not just the shorlist previously available. For
example:
SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London';
works fine now. It will also obey whatever DST rules were in effect at
just that date, which the previous implementation did not.
It also supports the AT TIME ZONE on the timetz datatype. The whole
handling of DST is a bit bogus there, so I chose to make it use whatever
DST rules are in effect at the time of executing the query. Not sure if
anybody is actually *using* timetz though; it seems pretty
unpredictable just because of this...
Magnus Hagander
>> assuming this side effect is removed, though?
>
>I have no problem with the hashtable, only with preloading it with
>everything. What I'd like to see is that the table inherited at fork()
>contains just the data for the default timezone. (At least in the
>normal case where that setting hasn't been changed since postmaster
>start.)
Here's a patch doing this. It changes score_timezone not to use pg_tzset(),
and thus avoids loading all the zones into the cache. The actual timezone
being picked will be set using set_global_timezone() which in turn calls
pg_tzset() and loads it in the cache.
Magnus Hagander
it is sufficient to track whether a backend holds a lock or not, and
store information about transaction vs. session locks only in the
inside-the-backend LocalLockTable. Since there can now be but one
PROCLOCK per lock per backend, LockCountMyLocks() is no longer needed,
thus eliminating some O(N^2) behavior when a backend holds many locks.
Also simplify the LockAcquire/LockRelease API by passing just a
'sessionLock' boolean instead of a transaction ID. The previous API
was designed with the idea that per-transaction lock holding would be
important for subtransactions, but now that we have subtransactions we
know that this is unwanted. While at it, add an 'isTempObject' parameter
to LockAcquire to indicate whether the lock is being taken on a temp
table. This is not used just yet, but will be needed shortly for
two-phase commit.
part of service principal. If not set, any service principal matching
an entry in the keytab can be used.
NEW KERBEROS MATCHING BEHAVIOR FOR 8.1.
Todd Kover
if geqo_rand() returns exactly 1.0, resulting in failure due to indexing
off the end of the pool array. Also, since this is using inexact float math,
it seems wise to guard against roundoff error producing values slightly
outside the expected range. Per report from bug@zedware.org.
instead of just scalar variables. Add regression tests and update the
documentation. Along the way, remove some redundant error checking
code from exec_stmt_perform().
Original patch from Pavel Stehule, reworked by Neil Conway.
constraint while determining whether the index sort order matches the
query's ORDER BY. This for example allows an index on (x,y) to match
... WHERE x = 42 ORDER BY y;
It only works for btree indexes, but since those are the only ones we
currently have that are ordered at all, that's good enough for now.
Per popular demand.
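A quick sketch (table and index names invented); with the equality constraint on x, the planner may now use the index to satisfy the ORDER BY instead of sorting:
CREATE TABLE points (x integer, y integer);
CREATE INDEX points_x_y_idx ON points (x, y);
EXPLAIN SELECT * FROM points WHERE x = 42 ORDER BY y;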
nonconsecutive columns of a multicolumn index, as per discussion around
mid-May (pghackers thread "Best way to scan on-disk bitmaps"). This
turns out to require only minimal changes in btree, and so far as I can
see none at all in GiST. btcostestimate did need some work, but its
original assumption that index selectivity == heap selectivity was
quite bogus even before this.
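Illustration (names invented): with an index on (a, b, c), a query constraining only a and c can now apply both conditions within the index scan rather than only the leading column:
CREATE TABLE events (a integer, b integer, c integer);
CREATE INDEX events_abc_idx ON events (a, b, c);
EXPLAIN SELECT * FROM events WHERE a = 1 AND c = 2;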
mode to only affect the presentation of normal query results, not the
output of psql slash commands. Documentation updated. I also made
some unrelated minor psql cleanup. Per suggestion from Stuart Cooper.
a descriptor that uses the current transaction snapshot, rather than
SnapshotNow as it did before (and still does if INV_WRITE is set).
This means pg_dump will now dump a consistent snapshot of large object
contents, as it never could do before. Also, add a lo_create() function
that is similar to lo_creat() but allows the desired OID of the large
object to be specified. This will simplify pg_restore considerably
(but I'll fix that in a separate commit).
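Usage sketch (the OID is arbitrary; 262144 is the value of INV_READ):
SELECT lo_create(123456);        -- create a large object with the given OID
BEGIN;
SELECT lo_open(123456, 262144);  -- INV_READ: reads use the transaction snapshot
COMMIT;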
syntax for database connection parameters. It has been inside an
#ifdef NOT_USED block since 2001 or so and is marked as "broken", so
I don't think it is likely to be rehabilitated any time soon.
patch adds missing checks to the call sites of malloc(), strdup(),
PQmakeEmptyPGresult(), pqResultAlloc(), and pqResultStrdup(), and updates
the documentation. Per original report from Volkan Yazici about
PQmakeEmptyPGresult() not checking for malloc() failure.
These contain the SQLSTATE and error message of the current exception,
respectively. They are scope-local variables that are only defined
in exception handlers (so attempting to reference them outside an
exception handler is an error). Update the regression tests and the
documentation.
Also, do some minor related cleanup: export an unpack_sql_state()
function from the backend and use it to unpack a SQLSTATE into a
string, and add a free_var() function to pl_exec.c
Original patch from Pavel Stehule, review by Neil Conway.
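A minimal sketch of the new variables (function name invented):
CREATE FUNCTION safe_divide(a numeric, b numeric) RETURNS numeric AS $$
BEGIN
    RETURN a / b;
EXCEPTION WHEN OTHERS THEN
    -- SQLSTATE and SQLERRM are defined only inside the handler
    RAISE NOTICE 'caught %: %', SQLSTATE, SQLERRM;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;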
history customizable through a variable named HISTFILE, analogous to
psql's already implemented HISTCONTROL and HISTSIZE variables, and
bash's HISTFILE variable.
The motivation was to be able to get psql to maintain separate
histories for separate databases. This is now easily achievable
through a line like the following in ~/.psqlrc:
\set HISTFILE ~/.psql_history-:DBNAME
Andreas Seltenreich
to a subquery if the outer query is simple enough that the LIMIT can
be reflected directly to the subquery. This didn't use to be very
interesting, because a subquery that couldn't have been flattened into
the upper query was usually not going to be very responsive to
tuple_fraction anyway. But with new code that allows UNION ALL subqueries
to pay attention to tuple_fraction, this is useful to do. In particular
this lets the optimization occur when the UNION ALL is directly inside
a view.
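For instance (view and table names invented), a small LIMIT on a view over a UNION ALL can now steer each branch toward a fast-start plan:
CREATE VIEW all_orders AS
    SELECT * FROM orders_2004
    UNION ALL
    SELECT * FROM orders_2005;
SELECT * FROM all_orders LIMIT 10;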
if the limit were directly applied to it. This does not actually
add a LIMIT plan node to the generated subqueries --- that would be
useless overhead --- but it does cause the planner to prefer fast-
start plans when the limit is small. After an idea from Phil Endecott.
this in turn causes CREATE TABLE AS in plpgsql to set ROW_COUNT.
This is how it behaved before 7.4; I had unintentionally changed the
behavior in a bit of sloppy micro-optimization.
pg_restore. It restores the given schemaname only. It can be used in
conjunction with the -t and other switches to make the selection very
fine grained.
Richard van den Bergg, CISSP
function that accepts a double precision argument assumed to be a Unix
epoch timestamp and returns timestamp with time zone, and accompanying
documentation.
Usage:
test=# select to_timestamp(200120400);
to_timestamp
------------------------
1976-05-05 14:00:00+09
(1 row)
Michael Glaesemann
psql. i.e. "\pset format troff-ms". The patch also corrects some
problems with the "latex" format, notably defining an extra column in
the output table, and correcting some alignment issues; it also
changes the output to match the border setting as documented in the
manual page and as shown with the "aligned" format.
The troff-ms output is mostly identical to the latex output allowing
for the differences between the two typesetters.
The output should be saved in a file and piped as follows:
cat file | tbl | troff -T ps -ms > file.ps
or
tbl file | troff -T ps -ms > file.ps
Because the output contains tabs, you'll need to redirect psql output to
a file or use "script", rather than pasting from a terminal window,
where the tabs may be replaced with spaces.
Roger Leigh
of a relation in a flat 'joininfo' list. The former arrangement grouped
the join clauses according to the set of unjoined relids used in each;
however, profiling on test cases involving lots of joins proves that
that data structure is a net loss. It takes more time to group the
join clauses together than is saved by avoiding duplicate tests later.
It doesn't help any that there are usually not more than one or two
clauses per group ...
as well as the existing pg_catalog entries for prefix and postfix %.
These have never been documented, though they did appear in one old
regression test. This avoids surprising behavior in cases like
"SELECT -25 % -10". Per recent discussion.
Note: although there is a catalog change here, I did not force initdb
since there's no harm in leaving the inaccessible entries in one's
copy of pg_operator.
transaction IDs, rather than like subtrans; in particular, the information
now survives a database restart. Per previous discussion, this is
essential for PITR log shipping and for 2PC.
last nextval() or setval() performed by the current session. Update the
docs, add regression tests, and bump the catalog version. Patch from
Dennis Björklund, various improvements by Neil Conway.
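For example:
CREATE SEQUENCE s;
SELECT nextval('s');   -- 1
SELECT lastval();      -- 1, the value from the last nextval()/setval() in this session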
This allows the result of executing a SELECT to be assigned to a row
variable, record variable, or list of scalars. Docs and regression tests
updated. Per Pavel Stehule, improvements and cleanup by Neil Conway.
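Assuming this refers to the EXECUTE ... INTO form, usage would look roughly like this (table and function names invented):
CREATE FUNCTION get_name(uid integer) RETURNS text AS $$
DECLARE
    rec record;
BEGIN
    EXECUTE 'SELECT * FROM users WHERE id = ' || uid INTO rec;
    RETURN rec.name;
END;
$$ LANGUAGE plpgsql;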
---------------------------------------------------------------------------
While playing around, I got the following error message:
--
FATAL: pre-existing shared memory block (key 5432001, ID 90898435) is
still in use
HINT: If you're sure there are no old server processes still running,
remove the shared memory block with the command "ipcrm", or just delete
the file "/home/hlinnaka/pgsql/data/postmaster.pid".
---
That's normal because I used "kill -9 postmaster" to shut down.
The hint advises me to use "ipcrm", but there's the "ipcclean" script in
bin for just this purpose. The hint should probably advise to use
ipcclean.
The attached patch replaces all occurrences of "ipcrm" with "ipcclean" in
src/backend/utils/init/miscinit.c and all the translations in
src/backend/po.
While reviewing the patch, I noticed a likely typo in hr.po. While I
don't speak Croatian, the translation seems to advise using the "icpm(1)"
command. I changed that to "ipcclean" too.
Heikki Linnakangas
up have the standard layout with unused space between pd_lower and pd_upper.
When this is set, XLogInsert will omit the unused space without bothering
to scan it to see if it's zero. That saves time in XLogInsert, and also
allows reversion of my earlier patch to make PageRepairFragmentation et al
explicitly re-zero freed space. Per suggestion by Heikki Linnakangas.
other_rel_list with a single array indexed by rangetable index.
This reduces find_base_rel from O(N) to O(1) without any real penalty.
While find_base_rel isn't one of the major bottlenecks in any profile
I've seen so far, it was starting to creep up on the radar screen
for complex queries --- so might as well fix it.
a new PlannerInfo struct, which is passed around instead of the bare
Query in all the planning code. This commit is essentially just a
code-beautification exercise, but it does open the door to making
larger changes to the planner data structures without having to muck
with the widely-known Query struct.
1. Rename spi_return_next to return_next.
2. Add a new test for return_next.
3. Update the expected output.
4. Update the documentation.
Abhijit Menon-Sen
representation as the jointree) with two lists of RTEs, one showing
the RTEs accessible by qualified names, and the other showing the RTEs
accessible by unqualified names. I think this is conceptually simpler
than what we did before, and it's sure a whole lot easier to search.
This seems to eliminate the parse-time bottleneck for deeply nested
JOIN structures that was exhibited by phil@vodafone.
---------------------------------------------------------------------------
Tom Lane <tgl@sss.pgh.pa.us> writes:
> a_ogawa <a_ogawa@hi-ho.ne.jp> writes:
> > It is a reasonable idea. However, the majority part of MemSet was not
> > able to be avoided by this idea. Because the per-tuple contexts are used
> > at the early stage of executor.
>
> Drat. Well, what about changing that? We could introduce additional
> contexts or change the startup behavior so that the ones that are
> frequently reset don't have any data in them unless you are working
> with pass-by-ref values inside the inner loop.
That might be possible. However, I think that we should change only
aset.c about this article.
I thought further: We can check whether context was used from the last
reset even when blocks list is not empty. Please see attached patch.
postgresql.conf.
---------------------------------------------------------------------------
Here's an updated version of the patch, with the following changes:
1) No longer uses "service name" as "application version". It's instead
hardcoded as "postgres". It could be argued that this part should be
backpatched to 8.0, but it doesn't make a big difference until you can
start changing it with GUC / connection parameters. This change only
affects kerberos 5, not 4.
2) Now downcases kerberos usernames when the client is running on win32.
3) Adds guc option for "krb_caseins_users" to make the server ignore
case mismatch which is required by some KDCs such as Active Directory.
Off by default, per discussion with Tom. This change only affects
kerberos 5, not 4.
4) Updated so it doesn't conflict with the rendezvous/bonjour patch
already in ;-)
Magnus Hagander
>
> > The second issue is where plperl returns a large result set.
I have attached the following seven patches to address this problem:
1. Trivial. Replaces some errant spaces with tabs.
2. Trivial. Fixes the spelling of Jan's name, and gets rid of many
inane, useless, annoying, and often misleading comments. Here's
a sample: "plperl_init_all() - Initialize all".
(I have tried to add some useful comments here and there, and will
continue to do so now and again.)
3. Trivial. Splits up some long lines.
4. Converts SRFs in PL/Perl to use a Tuplestore and SFRM_Materialize
to return the result set, based on the PL/PgSQL model.
There are two major consequences: result sets will spill to disk when
they can no longer fit in work_mem; and "select foo_srf()" no longer
works. (I didn't lose sleep over the latter, since that form is not
valid in PL/PgSQL, and it's not documented in PL/Perl.)
5. Trivial, but important. Fixes use of "undef" instead of undef. This
would cause empty functions to fail in bizarre ways. I suspect that
there's still another (old) bug here. I'll investigate further.
6. Moves the majority of (4) out into a new plperl_return_next()
function, to make it possible to expose the functionality to
Perl; cleans up some of the code besides.
7. Add an spi_return_next function for use in Perl code.
If you want to apply the patches and try them out, 8-composite.diff is
what you should use. (Note: my patches depend upon Andrew's use-strict
and %_SHARED patches being applied.)
Here's something to try:
create or replace function foo() returns setof record as $$
$i = 0;
for ("World", "PostgreSQL", "PL/Perl") {
spi_return_next({f1=>++$i, f2=>'Hello', f3=>$_});
}
return;
$$ language plperl;
select * from foo() as (f1 integer, f2 text, f3 text);
(Many thanks to Andrews Dunstan and Supernews for their help.)
Abhijit Menon-Sen
> a_ogawa <a_ogawa@hi-ho.ne.jp> writes:
> > It is a reasonable idea. However, the majority part of MemSet was not
> > able to be avoided by this idea. Because the per-tuple contexts are used
> > at the early stage of executor.
>
> Drat. Well, what about changing that? We could introduce additional
> contexts or change the startup behavior so that the ones that are
> frequently reset don't have any data in them unless you are working
> with pass-by-ref values inside the inner loop.
That might be possible. However, I think that we should change only
aset.c about this article.
I thought further: We can check whether context was used from the last
reset even when blocks list is not empty. Please see attached patch.
The effect of the patch that I measured is as follows:
o Execution time that executed the SQL ten times.
(1)Linux(CPU: Pentium III, Compiler option: -O2)
- original: 24.960s
- patched : 23.114s
(2)Linux(CPU: Pentium 4, Compiler option: -O2)
- original: 8.730s
- patched : 7.962s
(3)Solaris(CPU: Ultra SPARC III, Compiler option: -O2)
- original: 37.0s
- patched : 33.7s
Atsushi Ogawa (a_ogawa)
RTE of interest, rather than the whole rangetable list. This makes
the API more understandable and avoids duplicate RTE lookups. This
patch reverts no-longer-needed portions of my patch of 2004-08-19.
performance problem pointed out by phil@vodafone: to wit, we were
spending O(N^2) time to check dropped-ness in an N-deep join tree,
even in the case where the tree was freshly constructed and couldn't
possibly mention any dropped columns. Instead of recursing in
get_rte_attribute_is_dropped(), change the data structure definition:
the joinaliasvars list of a JOIN RTE must have a NULL Const instead
of a Var at any position that references a now-dropped column. This
costs nothing during normal parse-rewrite-plan path, and instead we
have a linear-time update to make when loading a stored rule that
might contain now-dropped columns. While at it, move the responsibility
for acquiring locks on relations referenced by rules into this separate
function (which I therefore chose to call AcquireRewriteLocks).
This saves effort --- namely, duplicated lock grabs in parser and rewriter
--- in the normal path at a cost of one extra non-locked heap_open()
in the stored-rule path; seems a good tradeoff. A fringe benefit is
that it is now *much* clearer that we acquire lock on relations referenced
in rules before we make any rewriter decisions based on their properties.
(I don't know of any bug of that ilk, but it wasn't exactly clear before.)
to just around the bare recv() call that gets a command from the client.
The former placement in PostgresMain was unsafe because the intermediate
processing layers (especially SSL) use facilities such as malloc that are
not necessarily re-entrant. Per report from counterstorm.com.
Instead of a separate CRC on each backup block, include backup blocks
in their parent WAL record's CRC; this is important to ensure that the
backup block really goes with the WAL record, ie there was not a page
tear right at the start of the backup block. Implement a simple form
of compression of backup blocks: drop any run of zeroes starting at
pd_lower, so as not to store the unused 'hole' that commonly exists in
PG heap and index pages. Tweak PageRepairFragmentation and related
routines to ensure they keep the unused space zeroed, so that the above
compression method remains effective. All per recent discussions.
and RelationNameGetTupleDesc() as deprecated; remove uses of the
latter in the contrib library. Along the way, clean up crosstab()
code and documentation a little.
conventions of only allowing octal, like \045. Remove support for
\decimal, \0octal, and \0xhex which matches the strtol() function but
didn't make sense with backslashes.
These now return the same character:
test=> \set x '\54'
test=> \echo :x
,
test=> \set x '\054'
test=> \echo :x
,
THIS IS A BACKWARD COMPATIBILITY CHANGE.
key, compare the new and old row versions. If the foreign key column has
not changed, we needn't enqueue the trigger, since the update cannot
violate the foreign key. This optimization was previously applied in the
RI trigger function, but it is more efficient to avoid firing the trigger
altogether. Per recent discussion on pgsql-hackers.
Also add a regression test for some unintuitive foreign key behavior, and
refactor some code that deals with the OIDs of the various RI trigger
functions.
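Illustration (names invented): an UPDATE that leaves the referencing column untouched no longer queues the RI trigger at all:
CREATE TABLE customers (id integer PRIMARY KEY);
CREATE TABLE orders (
    id integer PRIMARY KEY,
    customer_id integer REFERENCES customers (id),
    note text
);
UPDATE orders SET note = 'rush' WHERE id = 1;  -- customer_id unchanged: no RI check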