Commit Graph

332 Commits

Author SHA1 Message Date
Tom Lane 21dcda2713 Allocate ParamListInfo once per plpgsql function, not once per expression.
setup_param_list() was allocating a fresh ParamListInfo for each query or
expression evaluation requested by a plpgsql function.  There was probably
once good reason to do it like that, but for a long time we've had a
convention that there's a one-to-one mapping between the function's
PLpgSQL_datum array and the ParamListInfo slots, which means that a single
ParamListInfo can serve all the function's evaluation requests: the data
that would need to be passed is the same anyway.

In this patch, we retain the pattern of zeroing out the ParamListInfo
contents during each setup_param_list() call, because some of the slots may
be stale and we don't know exactly which ones.  So this patch only saves a
palloc/pfree per evaluation cycle and nothing more; still, that seems to be
good for a couple percent overall speedup on simple-arithmetic type
statements.  In future, though, we might be able to improve matters still
more by managing the param array contents more carefully.

Also, unify the former use of estate->cur_expr with that of
paramLI->parserSetupArg; they both were used to point to the active
expression, so we can combine the variables into just one.
2015-03-11 12:40:43 -04:00
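
A hedged PL/pgSQL sketch of the kind of simple-arithmetic statement that benefits; the function name is illustrative:

    CREATE FUNCTION count_to(n int) RETURNS int AS $$
    DECLARE
        i int := 0;
    BEGIN
        WHILE i < n LOOP
            i := i + 1;  -- each evaluation formerly palloc'd and pfree'd its own
                         -- ParamListInfo; now a single one serves the whole call
        END LOOP;
        RETURN i;
    END
    $$ LANGUAGE plpgsql;
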
Tom Lane 7f3014dce5 Change plpgsql's cast cache to consider source typmod as significant.
I had thought that there was no need to maintain separate cache entries
for different source typmods, but further experimentation shows that there
is an advantage to doing so in some cases.  In particular, if a domain has
a typmod (say, "CREATE DOMAIN d AS numeric(20,0)"), failing to notice the
source typmod leads to applying a length-coercion step even when the
source has the correct typmod.
2015-03-04 20:23:13 -05:00
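
A minimal sketch of the case described, reusing the commit's own domain definition; the variable names are invented:

    CREATE DOMAIN d AS numeric(20,0);

    DO $$
    DECLARE
        src numeric(20,0) := 42;
        dst d;
    BEGIN
        dst := src;  -- the source already carries typmod numeric(20,0); caching per
                     -- source typmod avoids a redundant length-coercion step here
    END
    $$;
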
Tom Lane 45f2c2fc4e Need to special-case RECORD as well as UNKNOWN in plpgsql's casting logic.
This is because can_coerce_type thinks that RECORD can be cast to any
composite type, but coerce_record_to_complex only works for inputs that are
RowExprs or whole-row Vars, so we get a hard failure on a CaseTestExpr.

Perhaps these corner cases ought to be fixed so that coerce_to_target_type
actually returns NULL as per its specification, rather than failing ...
but for the moment an extra check here is the path of least resistance.
2015-03-04 19:10:57 -05:00
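
A hedged illustration of the RECORD-to-composite assignment path in question; the type and variable names are made up:

    CREATE TYPE pairtype AS (a int, b text);

    DO $$
    DECLARE
        r record;
        p pairtype;
    BEGIN
        SELECT 1, 'one' INTO r;
        p := r;  -- a RECORD value cast to a composite type must be special-cased,
                 -- since coerce_record_to_complex cannot handle a CaseTestExpr input
    END
    $$;
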
Tom Lane 1345cc67bb Use standard casting mechanism to convert types in plpgsql, when possible.
plpgsql's historical method for converting datatypes during assignments was
to apply the source type's output function and then the destination type's
input function.  Aside from being miserably inefficient in most cases, this
method failed outright in many cases where a user might expect it to work;
an example is that "declare x int; ... x := 3.9;" would fail, not round the
value to 4.

Instead, let's convert by applying the appropriate assignment cast whenever
there is one.  To avoid breaking compatibility unnecessarily, fall back to
the I/O conversion method if there is no assignment cast.

So far as I can tell, there is just one case where this method produces a
different result than the old code in a case where the old code would not
have thrown an error.  That is assignment of a boolean value to a string
variable (type text, varchar, or bpchar); the old way gave boolean's output
representation, ie 't'/'f', while the new way follows the behavior of the
bool-to-text cast and so gives 'true' or 'false'.  This will need to be
called out as an incompatibility in the 9.5 release notes.

Aside from handling many conversion cases more sanely, this method is
often significantly faster than the old way.  In part that's because
of more effective caching of the conversion info.
2015-03-04 11:04:30 -05:00
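
The user-visible change, sketched in PL/pgSQL (behavior as described above; this is not taken from the commit's regression tests):

    DO $$
    DECLARE
        x int;
        s text;
    BEGIN
        x := 3.9;             -- formerly failed outright; the numeric-to-int
        RAISE NOTICE '%', x;  -- assignment cast now rounds, printing 4
        s := true;            -- formerly 't' via bool output; now 'true' via the
        RAISE NOTICE '%', s;  -- bool-to-text cast
    END
    $$;
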
Tom Lane e524cbdc45 Track typmods in plpgsql expression evaluation and assignment.
The main value of this change is to avoid expensive I/O conversions when
assigning to a variable that has a typmod specification, if the value
to be assigned is already known to have the right typmod.  This is
particularly valuable for arrays with typmod specifications; formerly,
in an assignment to an array element the entire array would invariably
get put through double I/O conversion to check the typmod, to absolutely
no purpose since we'd already properly coerced the new element value.

Extracted from my "expanded arrays" patch; this seems worth committing
separately, whatever becomes of that patch, since it's really an
independent issue.

As long as we're changing the function signatures, take the opportunity
to rationalize the argument lists of exec_assign_value, exec_cast_value,
and exec_simple_cast_value; that is, put the arguments into a saner order,
and get rid of the bizarre choice to pass exec_assign_value's isNull flag
by reference.
2015-02-28 14:34:35 -05:00
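
A hedged example of the array-element case this speeds up; the element type and loop count are arbitrary:

    DO $$
    DECLARE
        a numeric(6,2)[];
        i int;
    BEGIN
        FOR i IN 1..1000 LOOP
            a[i] := i + 0.25;  -- the new element is coerced to numeric(6,2) once;
                               -- formerly the whole array was also pushed through
                               -- double I/O conversion just to re-check the typmod
        END LOOP;
    END
    $$;
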
Noah Misch f5ef00aed4 Free SQLSTATE and SQLERRM no earlier than other PL/pgSQL variables.
"RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed
memory.  Back-patch to 9.0 (all supported versions).  Little code ran
between the premature free and the read, so non-assert builds are
unlikely to witness user-visible consequences.
2015-02-25 23:48:28 -05:00
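
A sketch of the kind of code affected (illustrative only; the read of freed memory was internal and rarely visible in non-assert builds):

    CREATE FUNCTION report_error() RETURNS text AS $$
    BEGIN
        PERFORM 1 / 0;
    EXCEPTION WHEN division_by_zero THEN
        RETURN SQLERRM;  -- the block's SQLERRM variable was freed before the
                         -- returned value had been read
    END
    $$ LANGUAGE plpgsql;
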
Tom Lane 9e3ad1aac5 Use fast path in plpgsql's RETURN/RETURN NEXT in more cases.
exec_stmt_return() and exec_stmt_return_next() have fast-path code for
handling a simple variable reference (i.e. "return var") without going
through the full expression evaluation machinery.  For some reason,
pl_gram.y was under the impression that this fast path only applied for
record/row variables; but in reality code for handling regular scalar
variables has been there all along.  Adjusting the logic to allow that
code to be used actually results in a net savings of code in pl_gram.y
(by eliminating some redundancy), and it buys a measurable though not
very impressive amount of speedup.

Noted while fooling with my expanded-array patch, wherein this makes a much
bigger difference because it enables returning an expanded array variable
without an extra flattening step.  But AFAICS this is a win regardless,
so commit it separately.
2015-02-16 15:28:48 -05:00
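
A hedged example of a return that can now take the fast path; previously pl_gram.y routed only record/row variables this way:

    CREATE FUNCTION add_one(i int) RETURNS int AS $$
    DECLARE
        result int;
    BEGIN
        result := i + 1;
        RETURN result;  -- a bare scalar variable reference, so no full expression
                        -- evaluation machinery is needed
    END
    $$ LANGUAGE plpgsql;
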
Tom Lane e983c4d1aa Rationalize the APIs of array element/slice access functions.
The four functions array_ref, array_set, array_get_slice, array_set_slice
have traditionally declared their array inputs and results as being of type
"ArrayType *".  This is a lie, and has been since Berkeley days, because
they actually also support "fixed-length array" types such as "name" and
"point"; not to mention that the inputs could be toasted.  These values
should be declared Datum instead to avoid confusion.  The current coding
already risks possible misoptimization by compilers, and it'll get worse
when "expanded" array representations become a valid alternative.

However, there's a fair amount of code using array_ref and array_set with
arrays that *are* known to be ArrayType structures, and there might be more
such places in third-party code.  Rather than cluttering those call sites
with PointerGetDatum/DatumGetArrayTypeP cruft, what I did was to rename the
existing functions to array_get_element/array_set_element, fix their
signatures, then reincarnate array_ref/array_set as backwards compatibility
wrappers.

array_get_slice/array_set_slice have no such constituency in the core code,
and probably not in third-party code either, so I just changed their APIs.
2015-02-16 12:23:58 -05:00
Bruce Momjian 4baaf863ec Update copyright for 2015
Backpatch certain files through 9.0
2015-01-06 11:43:47 -05:00
Tom Lane f4e031c662 Add bms_next_member(), and use it where appropriate.
This patch adds a way of iterating through the members of a bitmapset
nondestructively, unlike the old way with bms_first_member().  While
bms_next_member() is very slightly slower than bms_first_member()
(at least for typical-size bitmapsets), eliminating the need to palloc
and pfree a temporary copy of the target bitmapset is a significant win.
So this method should be preferred in all cases where a temporary copy
would be necessary.

Tom Lane, with suggestions from Dean Rasheed and David Rowley
2014-11-28 13:37:25 -05:00
Heikki Linnakangas c1008f0037 Check number of parameters in RAISE statement at compile time.
The number of % parameter markers in a RAISE statement should match the number
of parameters given. We used to check that at execution time, but we have
all the information needed at compile time, so let's check it at compile
time instead. It's generally better to find mistakes earlier.

Marko Tiikkaja, reviewed by Fabien Coelho
2014-09-02 15:56:50 +03:00
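
An illustrative mistake now caught at compile time; with check_function_bodies enabled it surfaces at CREATE FUNCTION rather than at first execution:

    CREATE FUNCTION broken_raise() RETURNS void AS $$
    BEGIN
        RAISE NOTICE 'only one marker: %', 1, 2;  -- two parameters, one % marker
    END
    $$ LANGUAGE plpgsql;
    -- ERROR:  too many parameters specified for RAISE
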
Bruce Momjian 0a78320057 pgindent run for 9.4
This includes removing tabs after periods in C comments, which was
applied to back branches, so this change should not affect backpatching.
2014-05-06 12:12:18 -04:00
Tom Lane 3f8c8e3c61 Fix failure to detoast fields in composite elements of structured types.
If we have an array of records stored on disk, the individual record fields
cannot contain out-of-line TOAST pointers: the tuptoaster.c mechanisms are
only prepared to deal with TOAST pointers appearing in top-level fields of
a stored row.  The same applies for ranges over composite types, nested
composites, etc.  However, the existing code only took care of expanding
sub-field TOAST pointers for the case of nested composites, not for other
structured types containing composites.  For example, given a command such
as

UPDATE tab SET arraycol = ARRAY[ROW(x,42)::mycompositetype] ...

where x is a direct reference to a field of an on-disk tuple, if that field
is long enough to be toasted out-of-line then the TOAST pointer would be
inserted as-is into the array column.  If the source record for x is later
deleted, the array field value would become a dangling pointer, leading
to errors along the line of "missing chunk number 0 for toast value ..."
when the value is referenced.  A reproducible test case for this was
provided by Jan Pecek, but it seems likely that some of the "missing chunk
number" reports we've heard in the past were caused by similar issues.

Code-wise, the problem is that PG_DETOAST_DATUM() is not adequate to
produce a self-contained Datum value if the Datum is of composite type.
Seen in this light, the problem is not just confined to arrays and ranges,
but could also affect some other places where detoasting is done in that
way, for example form_index_tuple().

I tried teaching the array code to apply toast_flatten_tuple_attribute()
along with PG_DETOAST_DATUM() when the array element type is composite,
but this was messy and imposed extra cache lookup costs whether or not any
TOAST pointers were present, indeed sometimes when the array element type
isn't even composite (since sometimes it takes a typcache lookup to find
that out).  The idea of extending that approach to all the places that
currently use PG_DETOAST_DATUM() wasn't attractive at all.

This patch instead solves the problem by decreeing that composite Datum
values must not contain any out-of-line TOAST pointers in the first place;
that is, we expand out-of-line fields at the point of constructing a
composite Datum, not at the point where we're about to insert it into a
larger tuple.  This rule is applied only to true composite Datums, not
to tuples that are being passed around the system as tuples, so it's not
as invasive as it might sound at first.  With this approach, the amount
of code that has to be touched for a full solution is greatly reduced,
and added cache lookup costs are avoided except when there actually is
a TOAST pointer that needs to be inlined.

The main drawback of this approach is that we might sometimes dereference
a TOAST pointer that will never actually be used by the query, imposing a
rather large cost that wasn't there before.  On the other side of the coin,
if the field value is used multiple times then we'll come out ahead by
avoiding repeat detoastings.  Experimentation suggests that common SQL
coding patterns are unaffected either way, though.  Applications that are
very negatively affected could be advised to modify their code to not fetch
columns they won't be using.

In future, we might consider reverting this solution in favor of detoasting
only at the point where data is about to be stored to disk, using some
method that can drill down into multiple levels of nested structured types.
That will require defining new APIs for structured types, though, so it
doesn't seem feasible as a back-patchable fix.

Note that this patch changes HeapTupleGetDatum() from a macro to a function
call; this means that any third-party code using that macro will not get
protection against creating TOAST-pointer-containing Datums until it's
recompiled.  The same applies to any uses of PG_RETURN_HEAPTUPLEHEADER().
It seems likely that this is not a big problem in practice: most of the
tuple-returning functions in core and contrib produce outputs that could
not possibly be toasted anyway, and the same probably holds for third-party
extensions.

This bug has existed since TOAST was invented, so back-patch to all
supported branches.
2014-05-01 15:19:06 -04:00
Bruce Momjian 7e04792a1c Update copyright for 2014
Update all files in head, and files COPYRIGHT and legal.sgml in all back
branches.
2014-01-07 16:05:30 -05:00
Heikki Linnakangas dde6282500 Fix more instances of "the the" in comments.
Plus one instance of "to to" in the docs.
2013-12-13 20:02:01 +02:00
Tom Lane c7b849a896 Prevent leakage of cached plans and execution trees in plpgsql DO blocks.
plpgsql likes to cache query plans and simple-expression execution state
trees across calls.  This is a considerable win for multiple executions
of the same function.  However, it's useless for DO blocks, since by
definition those are executed only once and discarded.  Nonetheless,
we were allowing a DO block's expression execution trees to survive
until end of transaction, resulting in a significant intra-transaction
memory leak, as reported by Yeb Havinga.  Worse, if the DO block exited
with an error, the compiled form of the block's code was leaked till
end of session --- along with subsidiary plancache entries.

To fix, make DO blocks keep their expression execution trees in a private
EState that's deleted at exit from the block, and add a PG_TRY block
to plpgsql_inline_handler to make sure that memory cleanup happens
even on error exits.  Also add a regression test covering error handling
in a DO block, because my first try at this broke that.  (The test is
not meant to prove that we don't leak memory anymore, though it could
be used for that with a much larger loop count.)

Ideally we'd back-patch this into all versions supporting DO blocks;
but the patch needs to add a field to struct PLpgSQL_execstate, and that
would break ABI compatibility for third-party plugins such as the plpgsql
debugger.  Given the small number of complaints so far, fixing this in
HEAD only seems like an acceptable choice.
2013-11-15 13:52:03 -05:00
Peter Eisentraut 001e114b8d Fix whitespace issues found by git diff --check, add gitattributes
Set per file type attributes in .gitattributes to fine-tune whitespace
checks.  With the associated cleanups, the tree is now clean for git
2013-11-10 14:48:29 -05:00
Robert Haas 689746c045 plpgsql: Add new option print_strict_params.
This option provides more detailed error messages when STRICT is used
and the number of rows returned is not one.

Marko Tiikkaja, reviewed by Ian Lawrence Barwick
2013-10-07 15:38:49 -04:00
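
A hedged usage sketch; the "users" table is hypothetical, and the option can also be enabled globally via the plpgsql.print_strict_params setting:

    CREATE TABLE users (id int, name text);

    CREATE FUNCTION get_name(uid int) RETURNS text AS $$
    #print_strict_params on
    DECLARE
        result text;
    BEGIN
        SELECT u.name INTO STRICT result FROM users u WHERE u.id = uid;
        RETURN result;
    END
    $$ LANGUAGE plpgsql;

    SELECT get_name(42);  -- with no matching row, the error's DETAIL now reports
                          -- the parameter values that were used
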
Tom Lane 3d13623d75 Prevent leakage of SPI tuple tables during subtransaction abort.
plpgsql often just remembers SPI-result tuple tables in local variables,
and has no mechanism for freeing them if an ereport(ERROR) causes an escape
out of the execution function whose local variable it is.  In the original
coding, that wasn't a problem because the tuple table would be cleaned up
when the function's SPI context went away during transaction abort.
However, once plpgsql grew the ability to trap exceptions, repeated
trapping of errors within a function could result in significant
intra-function-call memory leakage, as illustrated in bug #8279 from
Chad Wagner.

We could fix this locally in plpgsql with a bunch of PG_TRY/PG_CATCH
coding, but that would be tedious, probably slow, and prone to bugs of
omission; moreover it would do nothing for similar risks elsewhere.
What seems like a better plan is to make SPI itself responsible for
freeing tuple tables at subtransaction abort.  This patch attacks the
problem that way, keeping a list of live tuple tables within each SPI
function context.  Currently, such freeing is automatic for tuple tables
made within the failed subtransaction.  We might later add a SPI call to
mark a tuple table as not to be freed this way, allowing callers to opt
out; but until someone exhibits a clear use-case for such behavior, it
doesn't seem worth bothering.

A very useful side-effect of this change is that SPI_freetuptable() can
now defend itself against bad calls, such as duplicate free requests;
this should make things more robust in many places.  (In particular,
this reduces the risks involved if a third-party extension contains
now-redundant SPI_freetuptable() calls in error cleanup code.)

Even though the leakage problem is of long standing, it seems imprudent
to back-patch this into stable branches, since it does represent an API
semantics change for SPI users.  We'll patch this in 9.3, but live with
the leakage in older branches.
2013-07-25 16:46:14 -04:00
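
A sketch of the error-trapping loop pattern that could formerly accumulate memory within a single call; the loop count and failing query are arbitrary:

    CREATE FUNCTION trap_many() RETURNS void AS $$
    DECLARE
        i int;
    BEGIN
        FOR i IN 1..100000 LOOP
            BEGIN
                PERFORM 1 / 0;
            EXCEPTION WHEN division_by_zero THEN
                NULL;  -- SPI now frees any tuple tables made within the failed
                       -- subtransaction instead of leaving them behind
            END;
        END LOOP;
    END
    $$ LANGUAGE plpgsql;
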
Stephen Frost 8312832567 Add GET DIAGNOSTICS ... PG_CONTEXT in PL/PgSQL
This adds the ability to get the call stack as a string from within a
PL/PgSQL function, which can be handy for logging to a table, or to
include in a useful message to an end-user.

Pavel Stehule, reviewed by Rushabh Lathia and rather heavily whacked
around by Stephen Frost.
2013-07-24 18:53:27 -04:00
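
Minimal usage of the new item:

    CREATE FUNCTION where_am_i() RETURNS text AS $$
    DECLARE
        ctx text;
    BEGIN
        GET DIAGNOSTICS ctx = PG_CONTEXT;  -- the current call stack as a string
        RETURN ctx;
    END
    $$ LANGUAGE plpgsql;
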
Noah Misch 448fee2e23 Make comments reflect that omission of SPI_gettypmod() is intentional. 2013-07-12 18:07:46 -04:00
Noah Misch 7cd9b1371d Expose object name error fields in PL/pgSQL.
Specifically, permit attaching them to the error in RAISE and retrieving
them from a caught error in GET STACKED DIAGNOSTICS.  RAISE enforces
nothing about the content of the fields; for its purposes, they are just
additional string fields.  Consequently, clarify in the protocol and
libpq documentation that the usual relationships between error fields,
like a schema name appearing wherever a table name appears, are not
universal.  This freedom has other applications; consider a FDW
propagating an error from an RDBMS having no schema support.

Back-patch to 9.3, where core support for the error fields was
introduced.  This prevents the confusion of having a release where libpq
exposes the fields and PL/pgSQL does not.

Pavel Stehule, lexical revisions by Noah Misch.
2013-07-03 07:29:56 -04:00
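
A hedged sketch that attaches the fields with RAISE ... USING and reads them back with GET STACKED DIAGNOSTICS; all object names are illustrative:

    DO $$
    BEGIN
        RAISE EXCEPTION 'custom failure'
            USING SCHEMA = 'myschema', TABLE = 'mytable', COLUMN = 'mycol';
    EXCEPTION WHEN OTHERS THEN
        DECLARE
            s text; t text; c text;
        BEGIN
            GET STACKED DIAGNOSTICS s = SCHEMA_NAME, t = TABLE_NAME, c = COLUMN_NAME;
            RAISE NOTICE 'schema=% table=% column=%', s, t, c;
        END;
    END
    $$;
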
Tom Lane fdaf44862b Invent pre-commit/pre-prepare/pre-subcommit events for xact callbacks.
Currently it's only possible for loadable modules to get control during
post-commit cleanup of a transaction.  That doesn't work too well if they
want to do something that could throw an error; for example, an FDW might
need to issue a remote commit, which could well fail.  To improve matters,
extend the existing APIs for XactCallback and SubXactCallback functions
to provide new pre-commit events for this purpose.

The release notes will need to mention that existing callback functions
should be checked to make sure they don't do something unwanted when one
of the new event types occurs.  In the examples within our source tree,
contrib/sepgsql was fine but plpgsql had been a bit too cute.
2013-02-14 20:35:08 -05:00
Tom Lane 0900ac2d0d Fix plpgsql's reporting of plan-time errors in possibly-simple expressions.
exec_simple_check_plan and exec_eval_simple_expr attempted to call
GetCachedPlan directly.  This meant that if an error was thrown during
planning, the resulting context traceback would not include the line
normally contributed by _SPI_error_callback.  This is already inconsistent,
but just to be really odd, a re-execution of the very same expression
*would* show the additional context line, because we'd already have cached
the plan and marked the expression as non-simple.

The problem is easy to demonstrate in 9.2 and HEAD because planning of a
cached plan doesn't occur at all until GetCachedPlan is done.  In earlier
versions, it could only be an issue if initial planning had succeeded, then
a replan was forced (already somewhat improbable for a simple expression),
and the replan attempt failed.  Since the issue is mainly cosmetic in older
branches anyway, it doesn't seem worth the risk of trying to fix it there.
It is worth fixing in 9.2 since the instability of the context printout can
affect the results of GET STACKED DIAGNOSTICS, as per a recent discussion
on pgsql-novice.

To fix, introduce a SPI function that wraps GetCachedPlan while installing
the correct callback function.  Use this instead of calling GetCachedPlan
directly from plpgsql.

Also introduce a wrapper function for extracting a SPI plan's
CachedPlanSource list.  This lets us stop including spi_priv.h in
pl_exec.c, which was never a very good idea from a modularity standpoint.

In passing, fix a similar inconsistency that could occur in SPI_cursor_open,
which was also calling GetCachedPlan without setting up a context callback.
2013-01-30 20:02:23 -05:00
Bruce Momjian bd61a623ac Update copyrights for 2013
Fully update git head, and update back branches in ./COPYRIGHT and
legal.sgml files.
2013-01-01 17:15:01 -05:00
Tom Lane 31a891857a Improve pl/pgsql to support composite-type expressions in RETURN.
For some reason lost in the mists of prehistory, RETURN was only coded to
allow a simple reference to a composite variable when the function's return
type is composite.  Allow an expression instead, while preserving the
efficiency of the original code path in the case where the expression is
indeed just a composite variable's name.  Likewise for RETURN NEXT.

As is true in various other places, the supplied expression must yield
exactly the number and data types of the required columns.  There was some
discussion of relaxing that for pl/pgsql, but no consensus yet, so this
patch doesn't address that.

Asif Rehman, reviewed by Pavel Stehule
2012-12-06 23:09:52 -05:00
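
A brief sketch of what RETURN now accepts; the type and function names are invented:

    CREATE TYPE twoints AS (a int, b int);

    CREATE FUNCTION make_pair(x int) RETURNS twoints AS $$
    BEGIN
        RETURN ROW(x, x + 1)::twoints;  -- an arbitrary composite-valued expression,
                                        -- not just the name of a composite variable
    END
    $$ LANGUAGE plpgsql;
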
Alvaro Herrera c219d9b0a5 Split tuple struct defs from htup.h to htup_details.h
This reduces unnecessary exposure of other headers through htup.h, which
is very widely included by many files.

I have chosen to move the function prototypes to the new file as well,
because that means htup.h no longer needs to include tupdesc.h.  In
itself this doesn't have much effect in indirect inclusion of tupdesc.h
throughout the tree, because it's also required by execnodes.h; but it's
something to explore in the future, and it seemed best to do the htup.h
change now while I'm busy with it.
2012-08-30 16:52:35 -04:00
Robert Haas 3a0e4d36eb Make new event trigger facility actually do something.
Commit 3855968f32 added syntax, pg_dump,
psql support, and documentation, but the triggers didn't actually fire.
With this commit, they now do.  This is still a pretty basic facility
overall because event triggers do not get a whole lot of information
about what the user is trying to do unless you write them in C; and
there's still no option to fire them anywhere except at the very
beginning of the execution sequence, but it's better than nothing,
and a good building block for future work.

Along the way, add a regression test for ALTER LARGE OBJECT, since
testing of event triggers reveals that we haven't got one.

Dimitri Fontaine and Robert Haas
2012-07-20 11:39:01 -04:00
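
A minimal sketch of an event trigger that now actually fires; the names are illustrative:

    CREATE FUNCTION log_ddl() RETURNS event_trigger AS $$
    BEGIN
        RAISE NOTICE 'ddl command start: %', tg_tag;
    END
    $$ LANGUAGE plpgsql;

    CREATE EVENT TRIGGER log_ddl_start ON ddl_command_start
        EXECUTE PROCEDURE log_ddl();
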
Peter Eisentraut dd16f9480a Remove unreachable code
The Solaris Studio compiler warns about these instances, unlike more
mainstream compilers such as gcc.  But manual inspection showed that
the code is clearly not reachable, and we hope no worthy compiler will
complain about removing this code.
2012-07-16 22:15:03 +03:00
Bruce Momjian 927d61eeff Run pgindent on 9.2 source tree in preparation for first 9.3
commit-fest.
2012-06-10 15:20:04 -04:00
Joe Conway b58bacdacb PL/pgSQL RETURN NEXT was leaking converted tuples, causing
out-of-memory failures when looping through large numbers of rows.
Flag the converted tuples to be freed. Complaint and patch
by Joe.
2012-05-09 22:57:19 -07:00
Tom Lane 88a4cb30a4 Fix GET DIAGNOSTICS for case of assignment to function's first variable.
An incorrect and entirely unnecessary "safety check" in exec_stmt_getdiag()
caused the code to treat an assignment to a variable with dno zero as a
no-op.  Unfortunately, that's a perfectly valid dno.  This has been broken
since GET DIAGNOSTICS was invented.  It's not terribly surprising that the
bug went unnoticed for so long, since in most cases you probably wouldn't
use the function's first-created variable (normally its first parameter)
as a GET DIAGNOSTICS target.  Nonetheless, it's broken.  Per bug #6551
from Adam Buraczewski.
2012-03-22 14:13:17 -04:00
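
A hedged sketch of the formerly broken case, with the first-created variable (here the first parameter) as the GET DIAGNOSTICS target:

    CREATE FUNCTION rows_seen(n int) RETURNS int AS $$
    BEGIN
        PERFORM generate_series(1, 10);
        GET DIAGNOSTICS n = ROW_COUNT;  -- n has dno zero, so the bogus safety check
                                        -- silently skipped this assignment
        RETURN n;
    END
    $$ LANGUAGE plpgsql;

    SELECT rows_seen(0);  -- expected 10; returned 0 before the fix
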
Tom Lane 9dbf2b7d75 Restructure SELECT INTO's parsetree representation into CreateTableAsStmt.
Making this operation look like a utility statement seems generally a good
idea, and particularly so in light of the desire to provide command
triggers for utility statements.  The original choice of representing it as
SELECT with an IntoClause appendage had metastasized into rather a lot of
places, unfortunately, so that this patch is a great deal more complicated
than one might at first expect.

In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS
subcommands required restructuring some EXPLAIN-related APIs.  Add-on code
that calls ExplainOnePlan or ExplainOneUtility, or uses
ExplainOneQuery_hook, will need adjustment.

Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO,
which formerly were accepted though undocumented, are no longer accepted.
The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE.
The CREATE RULE case doesn't seem to have much real-world use (since the
rule would work only once before failing with "table already exists"),
so we'll not bother with that one.

Both SELECT INTO and CREATE TABLE AS still return a command tag of
"SELECT nnnn".  There was some discussion of returning "CREATE TABLE nnnn",
but for the moment backwards compatibility wins the day.

Andres Freund and Tom Lane
2012-03-19 21:38:12 -04:00
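
For reference, a hedged sketch of the equivalent forms discussed; the table and statement names are arbitrary:

    -- both of these now go through the CreateTableAsStmt utility path
    SELECT g.x, g.x * 2 AS y INTO tmp_a FROM generate_series(1, 10) AS g(x);
    CREATE TABLE tmp_b AS SELECT g.x, g.x * 2 AS y FROM generate_series(1, 10) AS g(x);

    -- the no-longer-accepted PREPARE ... SELECT INTO can be rewritten as:
    PREPARE series_q AS SELECT g.x FROM generate_series(1, 10) AS g(x);
    CREATE TABLE tmp_c AS EXECUTE series_q;
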
Tom Lane 58a9596ed4 Fix I/O-conversion-related memory leaks in plpgsql.
Datatype I/O functions are allowed to leak memory in CurrentMemoryContext,
since they are generally called in short-lived contexts.  However, plpgsql
calls such functions for purposes of type conversion, and was calling them
in its procedure context.  Therefore, any leaked memory would not be
recovered until the end of the plpgsql function.  If such a conversion
was done within a loop, quite a bit of memory could get consumed.  Fix by
calling such functions in the transient "eval_econtext", and adjust other
logic to match.  Back-patch to all supported versions.

Andres Freund, Jan Urbański, Tom Lane
2012-02-11 18:06:24 -05:00
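
The leak scenario sketched as a loop that repeatedly performs an I/O-based conversion; the loop count is arbitrary:

    DO $$
    DECLARE
        t text;
        i int;
    BEGIN
        FOR i IN 1..1000000 LOOP
            t := i;  -- int-to-text goes through output/input functions; memory they
                     -- leaked formerly accumulated until the function exited, and is
                     -- now reclaimed with the transient eval_econtext
        END LOOP;
    END
    $$;
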
Tom Lane bef47331b6 Code review for plpgsql fn_signature patch.
Don't quote the output of format_procedure(); it's already quoted quite
enough.  Remove the fn_name field, which was now just dead weight.  Fix
remaining expected-output files.
2012-02-01 02:14:37 -05:00
Heikki Linnakangas 4c6cedd1b0 Print function signature, not just name, in PL/pgSQL error messages.
This makes it unambiguous which function the message is coming from, if you
have overloaded functions.

Pavel Stehule, reviewed by Abhijit Menon-Sen.
2012-01-31 10:36:20 +02:00
Bruce Momjian e126958c2e Update copyright notices for year 2012. 2012-01-01 18:01:58 -05:00
Tom Lane 8722a1a06a Use the proper macro to convert a bool to a Datum.
The original coding was
	var->value = (Datum) state;
which is bogus, and then in commit 2f0f7b4bce
it was "corrected" to
	var->value = PointerGetDatum(state);
which is a faithful translation but still wrong.

This seems purely cosmetic, though, so no need for a back-patch.

Pavel Stehule
2011-11-27 12:57:11 -05:00
Tom Lane 051d1ba7a0 Avoid recursion while processing ELSIF lists in plpgsql.
The original implementation of ELSIF in plpgsql converted the construct
into nested simple IF statements.  This was prone to stack overflow with
long ELSIF lists, in two different ways.  First, it's difficult to generate
the parsetree without using right-recursion in the bison grammar, and
that's prone to parser stack overflow since nothing can be reduced until
the whole list has been read.  Second, we'd recurse during execution, thus
creating an unnecessary risk of execution-time stack overflow.  Rewrite
so that the ELSIF list is represented as a flat list, scanned via iteration
not recursion, and generated through left-recursion in the grammar.
Per a gripe from Håvard Kongsgård.
2011-10-27 15:21:51 -04:00
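
For illustration, an ELSIF chain of the kind that is now stored as a flat list and scanned iteratively:

    CREATE FUNCTION grade(score int) RETURNS text AS $$
    BEGIN
        IF score >= 90 THEN RETURN 'A';
        ELSIF score >= 80 THEN RETURN 'B';
        ELSIF score >= 70 THEN RETURN 'C';
        ELSIF score >= 60 THEN RETURN 'D';
        ELSE RETURN 'F';
        END IF;
    END
    $$ LANGUAGE plpgsql;
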
Tom Lane 16762b519c Speed up array element assignment in plpgsql by caching type information.
Cache assorted data in the PLpgSQL_arrayelem struct to avoid repetitive
catalog lookups over multiple executions of the same statement.

Pavel Stehule
2011-09-26 15:38:07 -04:00
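
A sketch of the repeated element assignment that benefits; each loop iteration re-executes the same statement, so the cached information is reused:

    DO $$
    DECLARE
        a int[] := '{}';
        i int;
    BEGIN
        FOR i IN 1..10000 LOOP
            a[i] := i;  -- element-type and coercion data now live in the statement's
                        -- PLpgSQL_arrayelem instead of being looked up every time
        END LOOP;
    END
    $$;
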
Tom Lane faf5cee7f0 Fix another Assert issue exposed by CLOBBER_CACHE_ALWAYS.
plpgsql's exec_stmt_execsql was Assert'ing that a CachedPlanSource was
is_valid immediately after exec_prepare_plan.  The risk factor in this case
is that after building the prepared statement, exec_prepare_plan calls
exec_simple_check_plan, which might try to generate a generic plan --- and
with CLOBBER_CACHE_ALWAYS or other unusual causes of invalidation, that
could result in an invalidation.  However, that path could only be taken
for a SELECT query, for which we need not set mod_stmt.  So in this case
I think it's best to just remove the Assert; it's okay to look at a
slightly-stale querytree for what we need here.  Per buildfarm testing.
2011-09-18 23:46:04 -04:00
Tom Lane e6ed34f70d Ensure generic plan gets used for a plpgsql expression with no parameters.
Now that a NULL ParamListInfo pointer causes significantly different
behavior in plancache.c, be sure to pass it that way when the expression
is known not to reference any plpgsql variables.  Saves a few setup
cycles anyway.
2011-09-16 12:31:23 -04:00
Tom Lane e6faf910d7 Redesign the plancache mechanism for more flexibility and efficiency.
Rewrite plancache.c so that a "cached plan" (which is rather a misnomer
at this point) can support generation of custom, parameter-value-dependent
plans, and can make an intelligent choice between using custom plans and
the traditional generic-plan approach.  The specific choice algorithm
implemented here can probably be improved in future, but this commit is
all about getting the mechanism in place, not the policy.

In addition, restructure the API to greatly reduce the amount of extraneous
data copying needed.  The main compromise needed to make that possible was
to split the initial creation of a CachedPlanSource into two steps.  It's
worth noting in particular that SPI_saveplan is now deprecated in favor of
SPI_keepplan, which accomplishes the same end result with zero data
copying, and no need to then spend even more cycles throwing away the
original SPIPlan.  The risk of long-term memory leaks while manipulating
SPIPlans has also been greatly reduced.  Most of this improvement is based
on use of the recently-added MemoryContextSetParent primitive.
2011-09-16 00:43:52 -04:00
Tom Lane 3d4890c0c5 Add GET STACKED DIAGNOSTICS plpgsql command to retrieve exception info.
This is more SQL-spec-compliant, more easily extensible, and better
performing than the old method of inventing special variables.

Pavel Stehule, reviewed by Shigeru Hanada and David Wheeler
2011-07-18 14:47:18 -04:00
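
Minimal usage of the new command:

    DO $$
    DECLARE
        msg   text;
        state text;
    BEGIN
        PERFORM 1 / 0;
    EXCEPTION WHEN OTHERS THEN
        GET STACKED DIAGNOSTICS msg   = MESSAGE_TEXT,
                                state = RETURNED_SQLSTATE;
        RAISE NOTICE 'caught % (%)', msg, state;
    END
    $$;
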
Tom Lane 1af37ec96d Replace errdetail("%s", ...) with errdetail_internal("%s", ...).
There may be some other places where we should use errdetail_internal,
but they'll have to be evaluated case-by-case.  This commit just hits
a bunch of places where invoking gettext is obviously a waste of cycles.
2011-07-16 14:22:18 -04:00
Alvaro Herrera b93f5a5673 Move Trigger and TriggerDesc structs out of rel.h into a new reltrigger.h
This lets us stop including rel.h into execnodes.h, which is a widely
used header.
2011-07-04 14:35:58 -04:00
Bruce Momjian 6560407c7d Pgindent run before 9.1 beta2. 2011-06-09 14:32:50 -04:00
Tom Lane 219dfae4e1 Make plpgsql provide the typmods for its variables to the main parser.
Historically we didn't do this, even though we had the information, because
plpgsql passed its Params via SPI APIs that only include type OIDs not
typmods.  Now that plpgsql uses parser callbacks to create Params, it's
easy to insert the right typmod.  This should generally result in lower
surprise factors, because a plpgsql variable that is declared with a typmod
will now work more like a table column with the same typmod.  In particular
it's the "right" way to fix bug #6020, in which plpgsql's attempt to return
an anonymous record type is defeated by stricter record-type matching
checks that were added in 9.0.  However, it's not impossible that this
could result in subtle behavioral changes that could break somebody's
existing plpgsql code, so I'm afraid to back-patch this change into
released branches.  In those branches we'll have to lobotomize the
record-type checks instead.
2011-05-22 15:15:07 -04:00
Peter Eisentraut bcf63a51e3 Message style improvements 2011-05-21 00:50:35 +03:00
Peter Eisentraut 5caa3479c2 Clean up most -Wunused-but-set-variable warnings from gcc 4.6
This warning is new in gcc 4.6 and part of -Wall.  This patch cleans
up most of the noise, but there are still some warnings that are
trickier to remove.
2011-04-11 22:28:45 +03:00