Commit Graph

1612 Commits

Author SHA1 Message Date
Peter Eisentraut 8b9e9644dc Replace AclObjectKind with ObjectType
AclObjectKind was basically just another enumeration for object types,
and we already have a preferred one for that.  It's only used in
aclcheck_error.  By using ObjectType instead, we can also give some more
precise error messages, for example "index" instead of "relation".

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-01-19 14:01:15 -05:00
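
A rough sketch of a call site affected by the change above; the specific values (ACLCHECK_NOT_OWNER, OBJECT_INDEX, the object name) are illustrative assumptions, not taken from the commit:

    /* previously: aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, "my_index"); */
    aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_INDEX, "my_index");  /* message can now say "index" */
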
Peter Eisentraut 70d6226e4f Use portal pinning in PL/Perl and PL/Python
PL/pgSQL "pins" internally generated portals so that user code cannot
close them by guessing their names.  Add this functionality to PL/Perl
and PL/Python as well, preventing users from manually closing cursors
created by spi_query and plpy.cursor, respectively.  (PL/Tcl does not
currently offer any cursor functionality.)
2018-01-10 17:10:32 -05:00
Peter Eisentraut b48b2f8793 Revert "Move portal pinning from PL/pgSQL to SPI"
This reverts commit b3617cdfbb.

This broke returning unnamed cursors from PL/pgSQL functions.
Apparently, there are no test cases for this.
2018-01-10 16:01:17 -05:00
Peter Eisentraut b3617cdfbb Move portal pinning from PL/pgSQL to SPI
PL/pgSQL "pins" internally generated (unnamed) portals so that user code
cannot close them by guessing their names.  This logic is also useful in
other languages and really for any code.  So move that logic into SPI.
An unnamed portal obtained through SPI_cursor_open() and related
functions is now automatically pinned, and SPI_cursor_close()
automatically unpins a portal that is pinned.

In the core distribution, this affects PL/Perl and PL/Python, preventing
users from manually closing cursors created by spi_query and
plpy.cursor, respectively.  (PL/Tcl does not currently offer any cursor
functionality.)

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2018-01-10 10:20:51 -05:00
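
A minimal C sketch of the SPI cursor calls touched by this commit, assuming an already-connected SPI session and previously prepared argtypes/values/nulls arrays; per the text above, the unnamed portal now comes back pinned and SPI_cursor_close unpins it:

    #include "executor/spi.h"

    SPIPlanPtr  plan;
    Portal      portal;

    plan = SPI_prepare("SELECT generate_series(1, $1)", 1, argtypes);
    portal = SPI_cursor_open(NULL, plan, values, nulls, true);  /* unnamed portal, automatically pinned */

    SPI_cursor_fetch(portal, true, 10);   /* user-level code can no longer close it by guessing its name */
    SPI_cursor_close(portal);             /* unpins the pinned portal, then drops it */
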
Bruce Momjian 9d4649ca49 Update copyright for 2018
Backpatch-through: certain files through 9.3
2018-01-02 23:30:12 -05:00
Tom Lane 3e724aac74 Merge coding of return/exit/continue cases in plpgsql's loop statements.
plpgsql's five different loop control statements contained three distinct
implementations of the same (or what ought to be the same, at least)
logic for handling return/exit/continue result codes from their child
statements.  At best, that's trouble waiting to happen, and there seems
no very good reason for the coding to be so different.  Refactor so that
all the common logic is expressed in a single macro.

Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us
2017-12-31 17:20:17 -05:00
Tom Lane dd2243f2ad Improve regression tests' code coverage for plpgsql control structures.
I noticed that our code coverage report showed considerable deficiency
in test coverage for PL/pgSQL control statements.  Notably, both
exec_stmt_block and most of the loop control statements had very poor
coverage of handling of return/exit/continue result codes from their
child statements; and exec_stmt_fori was seriously lacking in feature
coverage, having no test that exercised its BY or REVERSE features,
nor verification that its overflow defenses work.

Now that we have some infrastructure for plpgsql-specific test scripts,
the natural thing to do is make a new script rather than further extend
plpgsql.sql.  So I created a new script plpgsql_control.sql with the
charter to test plpgsql control structures, and moved a few existing
tests there because they fell entirely under that charter.  I then
added new test cases that exercise the bits of code complained of above.

Of the five kinds of loop statements, only exec_stmt_while's result code
handling is fully exercised by these tests.  That would be a deficiency
as things stand, but a follow-on commit will merge the loop statements'
result code handling into one implementation.  So testing each usage of
that implementation separately seems redundant.

In passing, also add a couple test cases to plpgsql.sql to more fully
exercise plpgsql's code related to expanded arrays --- I had thought
that area was sufficiently covered already, but the coverage report
showed a couple of un-executed code paths.

Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us
2017-12-31 17:04:11 -05:00
Tom Lane 6719b238e8 Rearrange execution of PARAM_EXTERN Params for plpgsql's benefit.
This patch does three interrelated things:

* Create a new expression execution step type EEOP_PARAM_CALLBACK
and add the infrastructure needed for add-on modules to generate that.
As discussed, the best control mechanism for that seems to be to add
another hook function to ParamListInfo, which will be called by
ExecInitExpr if it's supplied and a PARAM_EXTERN Param is found.
For stand-alone expressions, we add a new entry point to allow the
ParamListInfo to be specified directly, since it can't be retrieved
from the parent plan node's EState.

* Redesign the API for the ParamListInfo paramFetch hook so that the
ParamExternData array can be entirely virtual.  This also lets us get rid
of ParamListInfo.paramMask, instead leaving it to the paramFetch hook to
decide which param IDs should be accessible or not.  plpgsql_param_fetch
was already doing the identical masking check, so having callers do it too
seemed redundant.  While I was at it, I added a "speculative" flag to
paramFetch that the planner can specify as TRUE to avoid unwanted failures.
This solves an ancient problem for plpgsql that it couldn't provide values
of non-DTYPE_VAR variables to the planner for fear of triggering premature
"record not assigned yet" or "field not found" errors during planning.

* Rework plpgsql to get rid of the need for "unshared" parameter lists,
by dint of turning the single ParamListInfo per estate into a nearly
read-only data structure that doesn't instantiate any per-variable data.
Instead, the paramFetch hook controls access to per-variable data and can
make the right decisions on the fly, replacing the cases that we used to
need multiple ParamListInfos for.  This might perhaps have been a
performance loss on its own, but by using a paramCompile hook we can
bypass plpgsql_param_fetch entirely during normal query execution.
(It's now only called when, eg, we copy the ParamListInfo into a cursor
portal.  copyParamList() or SerializeParamList() effectively instantiate
the virtual parameter array as a simple physical array without a
paramFetch hook, which is what we want in those cases.)  This allows
reverting most of commit 6c82d8d1f, though I kept the cosmetic
code-consolidation aspects of that (eg the assign_simple_var function).

Performance testing shows this to be at worst a break-even change,
and it can provide wins ranging up to 20% in test cases involving
accesses to fields of "record" variables.  The fact that values of
such variables can now be exposed to the planner might produce wins
in some situations, too, but I've not pursued that angle.

In passing, remove the "parent" pointer from the arguments to
ExecInitExprRec and related functions, instead storing that pointer in a
transient field in ExprState.  The ParamListInfo pointer for a stand-alone
expression is handled the same way; we'd otherwise have had to add
yet another recursively-passed-down argument in expression compilation.

Discussion: https://postgr.es/m/32589.1513706441@sss.pgh.pa.us
2017-12-21 12:57:45 -05:00
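
A sketch of what a "virtual" paramFetch hook can look like after this change (the exact typedef lives in params.h; the constant value filled in below is a made-up placeholder):

    static ParamExternData *
    my_param_fetch(ParamListInfo params, int paramid,
                   bool speculative, ParamExternData *workspace)
    {
        /* Fill the caller-provided workspace on demand; no physical array needed. */
        workspace->value = Int32GetDatum(42);   /* placeholder value */
        workspace->isnull = false;
        workspace->pflags = 0;
        workspace->ptype = INT4OID;

        /* When "speculative" is true, the planner is asking: avoid raising
         * errors for values that cannot safely be provided yet. */
        return workspace;
    }
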
Tom Lane c98c35cd08 Avoid putting build-location-dependent strings into generated files.
Various Perl scripts we use to generate files were in the habit of
printing things like "generated by $0" into their output files.
That looks like a fine idea at first glance, but it results in
non-reproducible output, because in VPATH builds $0 won't be just
the name of the script file, but a full path for it.  We'd prefer
that you get identical results whether using VPATH or not, so this
is a bad thing.

Some of these places also printed their input file name(s), causing
an additional hazard of the same type.

Hence, establish a policy that thou shalt not print $0, nor input file
pathnames, into output files (they're still allowed in error messages,
though).  Instead just write the script name verbatim.  While we are at
it, we can make these annotations more useful by giving the script's
full relative path name within the PG source tree, eg instead of
Gen_fmgrtab.pl let's print src/backend/utils/Gen_fmgrtab.pl.

Not all of the changes made here actually affect any files shipped
in finished tarballs today, but it seems best to apply the policy
everyplace so that nobody copies unsafe code into places where it
could matter.

Christoph Berg and Tom Lane

Discussion: https://postgr.es/m/20171215102223.GB31812@msg.df7cb.de
2017-12-21 10:57:06 -05:00
Tom Lane 997071691f Fix oversights in new plpgsql test suite infrastructure.
Fix a couple of minor oversights in commit 632b03da3: the tests
should be run in database "pl_regression" like the other PLs do,
and we should clean up the tests' output cruft during "make clean".
2017-12-16 11:32:49 -05:00
Tom Lane 9fa6f00b13 Rethink MemoryContext creation to improve performance.
This patch makes a number of interrelated changes to reduce the overhead
involved in creating/deleting memory contexts.  The key ideas are:

* Include the AllocSetContext header of an aset.c context in its first
malloc request, rather than allocating it separately in TopMemoryContext.
This means that we now always create an initial or "keeper" block in an
aset, even if it never receives any allocation requests.

* Create freelists in which we can save and recycle recently-destroyed
asets (this idea is due to Robert Haas).

* In the common case where the name of a context is a constant string,
just store a pointer to it in the context header, rather than copying
the string.

The first change eliminates a palloc/pfree cycle per context, and
also avoids bloat in TopMemoryContext, at the price that creating
a context now involves a malloc/free cycle even if the context never
receives any allocations.  That would be a loser for some common
usage patterns, but recycling short-lived contexts via the freelist
eliminates that pain.

Avoiding copying constant strings not only saves strlen() and strcpy()
overhead, but is an essential part of the freelist optimization because
it makes the context header size constant.  Currently we make no
attempt to use the freelist for contexts with non-constant names.
(Perhaps someday we'll need to think harder about that, but in current
usage, most contexts with custom names are long-lived anyway.)

The freelist management in this initial commit is pretty simplistic,
and we might want to refine it later --- but in common workloads that
will never matter because the freelists will never get full anyway.

To create a context with a non-constant name, one is now required to
call AllocSetContextCreateExtended and specify the MEMCONTEXT_COPY_NAME
option.  AllocSetContextCreate becomes a wrapper macro, and it includes
a test that will complain about non-string-literal context name
parameters on gcc and similar compilers.

An unfortunate side effect of making AllocSetContextCreate a macro is
that one is now *required* to use the size parameter abstraction macros
(ALLOCSET_DEFAULT_SIZES and friends) with it; the pre-9.6 habit of
writing out individual size parameters no longer works unless you
switch to AllocSetContextCreateExtended.

Internally to the memory-context-related modules, the context creation
APIs are simplified, removing the rather baroque original design whereby
a context-type module called mcxt.c which then called back into the
context-type module.  That saved a bit of code duplication, but not much,
and it prevented context-type modules from exercising control over the
allocation of context headers.

In passing, I converted the test-and-elog validation of aset size
parameters into Asserts to save a few more cycles.  The original thought
was that callers might compute size parameters on the fly, but in practice
nobody does that, so it's useless to expend cycles on checking those
numbers in production builds.

Also, mark the memory context method-pointer structs "const",
just for cleanliness.

Discussion: https://postgr.es/m/2264.1512870796@sss.pgh.pa.us
2017-12-13 13:55:16 -05:00
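
A short sketch of the API shape described above. The first call is the common constant-name case; the second reflects the commit's description of AllocSetContextCreateExtended with MEMCONTEXT_COPY_NAME (the argument order shown is an assumption, and namebuf stands for a hypothetical name built at run time):

    #include "utils/memutils.h"

    MemoryContext cxt;
    MemoryContext cxt2;

    /* constant name: the wrapper macro, which now insists on the size-abstraction macros */
    cxt = AllocSetContextCreate(CurrentMemoryContext,
                                "MySketchContext",
                                ALLOCSET_DEFAULT_SIZES);

    /* non-constant name: spell out the Extended form and ask for the name to be copied */
    cxt2 = AllocSetContextCreateExtended(CurrentMemoryContext,
                                         namebuf,
                                         MEMCONTEXT_COPY_NAME,
                                         ALLOCSET_DEFAULT_SIZES);
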
Peter Eisentraut 632b03da31 Start a separate test suite for plpgsql
The plpgsql.sql test file in the main regression tests is now by far the
largest after numeric_big, making editing and managing the test cases
very cumbersome.  The other PLs have their own test suites split up into
smaller files by topic.  It would be nice to have that for plpgsql as
well.  So, to get that started, set up test infrastructure in
src/pl/plpgsql/src/ and split out the recently added procedure test
cases into a new file there.  That file now mirrors the test cases added
to the other PLs, making managing those matching tests a bit easier too.

msvc build system changes with help from Michael Paquier
2017-12-13 11:02:29 -05:00
Peter Eisentraut 4c6744ed70 PL/Python: Fix potential NULL pointer dereference
After d0aa965c0a, one error path in
PLy_spi_execute_fetch_result() could result in the variable "result"
being dereferenced after being set to NULL.  Rearrange the code a bit to
fix that.

Also add another SPI_freetuptable() call so that that is cleared in all
error paths.

discovered by John Naylor <jcnaylor@gmail.com> via scan-build

ideas and review by Tom Lane
2017-12-12 20:52:14 -05:00
Tom Lane 390d58135b Fix plpgsql to reinitialize record variables at block re-entry.
If one exits and re-enters a DECLARE ... BEGIN ... END block within a
single execution of a plpgsql function, perhaps due to a surrounding loop,
the declared variables are supposed to get re-initialized to null (or
whatever their initializer is).  But this failed to happen for variables
of type "record", because while exec_stmt_block() expected such variables
to be included in the block's initvarnos list, plpgsql_add_initdatums()
only adds DTYPE_VAR variables to that list.  This bug appears to have
been there since the aboriginal addition of plpgsql to our tree.

Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables
as well.  (We don't need to consider other DTYPEs because they don't
represent separately-stored values.)  I failed to resist the temptation
to make some nearby cosmetic adjustments, too.

No back-patch, because there have not been field complaints, and it
seems possible that somewhere out there someone has code depending
on the incorrect behavior.  In any case this change would have no
impact on correctly-written code.

Discussion: https://postgr.es/m/22994.1512800671@sss.pgh.pa.us
2017-12-09 12:03:04 -05:00
Tom Lane dd759b96ea In plpgsql, unify duplicate variables for record and row cases.
plpgsql's function exec_move_row() handles assignment of a composite
source value to either a PLpgSQL_rec or PLpgSQL_row target variable.
Oddly, rather than taking a single target argument which it could do
run-time type detection on, it was coded to take two separate arguments
(only one of which is allowed to be non-NULL).  This choice had then
back-propagated into storing two separate target variables in various
plpgsql statement nodes, with lots of duplicative coding and awkward
interface logic to support that.  Simplify matters by folding those
pairs down to single variables, distinguishing the two cases only
where we must ... which turns out to be only in exec_move_row itself.
This is purely refactoring and should not change any behavior.

In passing, remove unused field PLpgSQL_stmt_open.returntype.

Discussion: https://postgr.es/m/11787.1512713374@sss.pgh.pa.us
2017-12-08 11:20:58 -05:00
Tom Lane a852cfe967 Fix uninitialized-variable compiler warning induced by commit e4128ee76.
I'm a little bit astonished that anyone's compiler would have failed to
complain about this.  The compiler surely does not know that is_procedure
means the function return value will be ignored.
2017-12-03 11:25:17 -05:00
Peter Eisentraut e4128ee767 SQL procedures
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar.  This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.

This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL).  There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql.  Support for defining
procedures is available in all the languages supplied by the core
distribution.

While this commit is mainly syntax sugar around existing functionality,
future features will rely on having procedures as a separate object
type.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2017-11-30 11:03:20 -05:00
Peter Eisentraut c7f5c58e1c PL/Python: Fix remaining scan-build warnings
Apparently, scan-build thinks that proc->is_setof can change during
PLy_exec_function().  To make it clearer, save the value in a local
variable.

Also add an assertion to clear another warning.

Reviewed-by: John Naylor <jcnaylor@gmail.com>
2017-11-29 09:56:49 -05:00
Peter Eisentraut cdddd5d40b Add compiler hints to PLy_elog()
Decorate PLy_elog() in a similar way as elog(), to give compilers and
static analyzers hints in which cases it does not return.

Reviewed-by: John Naylor <jcnaylor@gmail.com>
2017-11-29 09:56:49 -05:00
Robert Haas eaedf0df71 Update typedefs.list and re-run pgindent
Discussion: http://postgr.es/m/CA+TgmoaA9=1RWKtBWpDaj+sF3Stgc8sHgf5z=KGtbjwPLQVDMA@mail.gmail.com
2017-11-29 09:24:24 -05:00
Peter Eisentraut 62546b4357 Revert "PL/Python: Fix potential NULL pointer dereference"
This reverts commit e42e2f3890.

It's not safe to return in the middle of a PG_TRY block, so this will
have to be done differently.
2017-11-28 13:55:39 -05:00
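
To illustrate the constraint mentioned above: PG_TRY saves error-recovery state with sigsetjmp and PG_END_TRY pops it, so control has to leave the block normally. A minimal sketch, where do_work() and cleanup() are hypothetical helpers:

    PG_TRY();
    {
        result = do_work();      /* must NOT "return" from inside this block */
    }
    PG_CATCH();
    {
        cleanup();
        PG_RE_THROW();
    }
    PG_END_TRY();

    return result;               /* returning is safe only after PG_END_TRY */
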
Peter Eisentraut e42e2f3890 PL/Python: Fix potential NULL pointer dereference
After d0aa965c0a, one error path in
PLy_spi_execute_fetch_result() could result in the variable "result"
being dereferenced after being set to NULL.  To fix that, just clear the
resources right there and return early.

Also add another SPI_freetuptable() call so that that is cleared in all
error paths.

discovered by John Naylor <jcnaylor@gmail.com> via scan-build
2017-11-28 11:28:05 -05:00
Noah Misch 84c4313c6f Support linking with MinGW-built Perl.
This is necessary for ActivePerl 5.18 onwards and for Strawberry Perl.
It is not sufficient for 32-bit builds with newer Visual Studio; these
fail with error LNK2026.  Back-patch to 9.3 (all supported versions).

Reported by Victor Wagner.

Discussion: https://postgr.es/m/20160326154321.7754ab8f@wagner.wagner.home
2017-11-23 20:22:04 -08:00
Peter Eisentraut d0aa965c0a Consistently catch errors from Python _New() functions
Python Py*_New() functions can fail and return NULL in out-of-memory
conditions.  The previous code handled that inconsistently or not at
all.  This change organizes that better.  If we are in a function that
is called from Python, we just check for failure and return NULL
ourselves, which will cause any exception information to be passed up.
If we are called from PostgreSQL, we consistently create an "out of
memory" error.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
2017-11-18 13:39:53 -05:00
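
A minimal sketch of the convention described above for the called-from-PostgreSQL case; the list is just an example object:

    PyObject *list = PyList_New(0);

    if (list == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_OUT_OF_MEMORY),
                 errmsg("out of memory")));

    /* In code called from Python, one would instead just return NULL so the
     * pending Python exception propagates upward. */
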
Tom Lane 687f096ea9 Make PL/Python handle domain-type conversions correctly.
Fix PL/Python so that it can handle domains over composite, and so that
it enforces domain constraints correctly in other cases that were not
always done properly before.  Notably, it didn't do arrays of domains
right (oversight in commit c12d570fa), and it failed to enforce domain
constraints when returning a composite type containing a domain field,
and if a transform function is being used for a domain's base type then
it failed to enforce domain constraints on the result.  Also, in many
places it missed checking domain constraints on null values, because
the plpy_typeio code simply wasn't called for Py_None.

Rather than try to band-aid these problems, I made a significant
refactoring of the plpy_typeio logic.  The existing design of recursing
for array and composite members is extended to also treat domains as
containers requiring recursion, and the APIs for the module are cleaned
up and simplified.

The patch also modifies plpy_typeio to rely on the typcache more than
it did before (which was pretty much not at all).  This reduces the
need for repetitive lookups, and lets us get rid of an ad-hoc scheme
for detecting changes in composite types.  I added a couple of small
features to typcache to help with that.

Although some of this is fixing bugs that long predate v11, I don't
think we should risk a back-patch: it's a significant amount of code
churn, and there've been no complaints from the field about the bugs.

Tom Lane, reviewed by Anthony Bykov

Discussion: https://postgr.es/m/24449.1509393613@sss.pgh.pa.us
2017-11-16 16:23:04 -05:00
Peter Eisentraut 0e1539ba0d Add some const decorations to prototypes
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
2017-11-10 13:38:57 -05:00
Peter Eisentraut 2eb4a831e5 Change TRUE/FALSE to true/false
The lower case spellings are C and C++ standard and are used in most
parts of the PostgreSQL sources.  The upper case spellings are only used
in some files/modules.  So standardize on the standard spellings.

The APIs for ICU, Perl, and Windows define their own TRUE and FALSE, so
those are left as is when using those APIs.

In code comments, we use the lower-case spelling for the C concepts and
keep the upper-case spelling for the SQL concepts.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2017-11-08 11:37:28 -05:00
Tom Lane 60651e4cdd Support domains over composite types in PL/Perl.
In passing, don't insist on rsi->expectedDesc being set unless we
actually need it; this allows succeeding in a couple of cases where
PL/Perl functions returning setof composite would have failed before,
and makes the error message more apropos in other cases.

Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
2017-10-28 14:02:21 -04:00
Robert Haas 682ce911f8 Allow parallel query for prepared statements with generic plans.
This was always intended to work, but due to an oversight in
max_parallel_hazard_walker, it didn't.  In testing, we missed the
fact that it was only working for custom plans, where the parameter
value has been substituted for the parameter itself early enough
that everything worked.  In a generic plan, the Param node survives
and must be treated as parallel-safe.  SerializeParamList provides
for the transmission of parameter values to workers.

Amit Kapila with help from Kuntal Ghosh.  Some changes by me.

Discussion: http://postgr.es/m/CAA4eK1+_BuZrmVCeua5Eqnm4Co9DAXdM5HPAOE2J19ePbR912Q@mail.gmail.com
2017-10-27 22:22:39 +02:00
Tom Lane 820c0305f6 Support domains over composite types in PL/Tcl.
Since PL/Tcl does little with SQL types internally, this is just a
matter of making it work with composite-domain function arguments
and results.

In passing, make it allow RECORD-type arguments --- that's a trivial
change that nobody had bothered with up to now.

Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
2017-10-26 16:00:17 -04:00
Peter Eisentraut 0008a106d4 Use Py_RETURN_NONE where suitable
This is more idiomatic style and available as of Python 2.4, which is
our minimum.
2017-09-29 16:51:39 -04:00
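
For reference, the equivalence involved, shown in a hypothetical extension function:

    #include <Python.h>

    static PyObject *
    give_back_none(PyObject *self, PyObject *args)
    {
        /* expands to: Py_INCREF(Py_None); return Py_None; */
        Py_RETURN_NONE;
    }
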
Peter Eisentraut 65c8656202 Fix plperl build
The changes in 639928c988 turned out to
require Perl 5.9.3, which is newer than our minimum required version.
So revert back to the old code for the normal case and only use the new
variant when both coverage and vpath are used.  As the minimum Perl
version moves forward, we can drop the old code sometime.
2017-09-27 15:51:04 -04:00
Peter Eisentraut 639928c988 Improve vpath support in plperl build
Run xsubpp with the -output option instead of redirecting stdout.  That
ensures that the #line directives in the output file point to the right
place in a vpath build.  This in turn fixes an error in coverage builds
that it can't find the source files.

Refactor the makefile rules while we're here.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2017-09-27 11:51:21 -04:00
Tom Lane b8060e41b5 Prefer argument name over "$n" for the refname of a plpgsql argument.
If a function argument has a name, use that as the "refname" of the
PLpgSQL_datum representing the argument, instead of $n as before.
This allows better error messages in some cases.

Pavel Stehule, reviewed by Jeevan Chalke

Discussion: https://postgr.es/m/CAFj8pRB9GyU2U1Sb2ssgP26DZ_yq-FYDfpvUvGQ=k4R=yOPVjg@mail.gmail.com
2017-09-11 16:24:43 -04:00
Andres Freund 2cd7084524 Change tupledesc->attrs[n] to TupleDescAttr(tupledesc, n).
This is a mechanical change in preparation for a later commit that
will change the layout of TupleDesc.  Introducing a macro to abstract
the details of where attributes are stored will allow us to change
that in a separate step and revise it in the future.

Author: Thomas Munro, editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com
2017-08-20 11:19:07 -07:00
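
The mechanical substitution described above, in sketch form (tupdesc and i assumed to be in scope):

    Form_pg_attribute attr;

    attr = tupdesc->attrs[i];           /* old: direct array access, ties callers to the layout */
    attr = TupleDescAttr(tupdesc, i);   /* new: accessor macro hides where attributes are stored */
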
Tom Lane b73f1b5c29 Make simpler-simple-expressions code cope with a Gather plan.
Commit 00418c612 expected that the plan generated for a simple-expression
query would always be a plain Result node.  However, if force_parallel_mode
is on, the planner might stick a Gather atop that.  Cope by looking through
the Gather.  For safety, assert that the Gather's tlist is trivial.

Per buildfarm.

Discussion: https://postgr.es/m/23425.1502822098@sss.pgh.pa.us
2017-08-15 16:49:52 -04:00
Tom Lane 00418c6124 Simplify plpgsql's check for simple expressions.
plpgsql wants to recognize expressions that it can execute directly
via ExecEvalExpr() instead of going through the full SPI machinery.
Originally the test for this consisted of recursively groveling through
the post-planning expression tree to see if it contained only nodes that
plpgsql recognized as safe.  That was a major maintenance headache, since
it required updating plpgsql every time we added any kind of expression
node.  It was also kind of expensive, so over time we added various
pre-planning checks to try to short-circuit having to do that.
Robert Haas pointed out that as of the SRF-processing changes in v10,
particularly the addition of Query.hasTargetSRFs, there really isn't
any reason to make the recursive scan at all: the initial checks cover
everything we really care about.  We do have to make sure that those
checks agree with what inline_function() considers, so that inlining
of a function that formerly wasn't inlined can't cause an expression
considered simple to become non-simple.

Hence, delete the recursive function exec_simple_check_node(), and tweak
those other tests to more exactly agree with inline_function().  Adjust
some comments and function naming to match.

Discussion: https://postgr.es/m/CA+TgmoZGZpwdEV2FQWaVxA_qZXsQE1DAS5Fu8fwxXDNvfndiUQ@mail.gmail.com
2017-08-15 12:28:39 -04:00
Peter Eisentraut f7668b2b35 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1a0b5e655d7871506c2b1c7ba562c2de6b6a55de
2017-08-07 13:55:34 -04:00
Peter Eisentraut 26d40ada3f Message style improvements 2017-08-04 18:31:32 -04:00
Tom Lane b4cc35fbb7 Tighten coding for non-composite case in plperl's return_next.
Coverity complained about this code's practice of using scalar variables
as single-element arrays.  While that's really just nitpicking, it probably
is more readable to declare them as arrays, so let's do that.  A more
important point is that the code was just blithely assuming that the
result tupledesc has exactly one column; if it doesn't, we'd likely get
a crash of some sort in tuplestore_putvalues.  Since the tupledesc is
manufactured outside of plperl, that seems like an uncomfortably long
chain of assumptions.  We can nail it down at little cost with a sanity
check earlier in the function.
2017-07-31 11:33:46 -04:00
Tom Lane 3c163a7fc7 PL/Perl portability fix: absorb relevant -D switches from Perl.
The Perl documentation is very clear that stuff calling libperl should
be built with the compiler switches shown by Perl's $Config{ccflags}.
We'd been ignoring that up to now, and mostly getting away with it,
but recent Perl versions contain ABI compatibility cross-checks that
fail on some builds because of this omission.  In particular the
sizeof(PerlInterpreter) can come out different due to some fields being
added or removed; which means we have a live ABI hazard that we'd better
fix rather than continuing to sweep it under the rug.

However, it still seems like a bad idea to just absorb $Config{ccflags}
verbatim.  In some environments Perl was built with a different compiler
that doesn't even use the same switch syntax.  -D switch syntax is pretty
universal though, and absorbing Perl's -D switches really ought to be
enough to fix the problem.

Furthermore, Perl likes to inject stuff like -D_LARGEFILE_SOURCE and
-D_FILE_OFFSET_BITS=64 into $Config{ccflags}, which affect libc ABIs on
platforms where they're relevant.  Adopting those seems dangerous too.
It's unclear whether a build wherein Perl and Postgres have different ideas
of sizeof(off_t) etc would work, or whether anyone would care about making
it work.  But it's dead certain that having different stdio ABIs in
core Postgres and PL/Perl will not work; we've seen that movie before.
Therefore, let's also ignore -D switches for symbols beginning with
underscore.  The symbols that we actually need to import should be the ones
mentioned in perl.h's PL_bincompat_options stanza, and none of those start
with underscore, so this seems likely to work.  (If it turns out not to
work everywhere, we could consider intersecting the symbols mentioned in
PL_bincompat_options with the -D switches.  But that will be much more
complicated, so let's try this way first.)

This will need to be back-patched, but first let's see what the
buildfarm makes of it.

Ashutosh Sharma, some adjustments by me

Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
2017-07-28 14:25:28 -04:00
Tom Lane bebe174bb4 PL/Perl portability fix: avoid including XSUB.h in plperl.c.
In Perl builds that define PERL_IMPLICIT_SYS, XSUB.h defines macros
that replace a whole lot of basic libc functions with Perl functions.
We can't tolerate that in plperl.c; it breaks at least PG_TRY and
probably other stuff.  The core idea of this patch is to include XSUB.h
only in the .xs files where it's really needed, and to move any code
broken by PERL_IMPLICIT_SYS out of the .xs files and into plperl.c.

The reason this hasn't been a problem before is that our build techniques
did not result in PERL_IMPLICIT_SYS appearing as a #define in PL/Perl,
even on some platforms where Perl thinks it is defined.  That's about to
change in order to fix a nasty portability issue, so we need this work to
make the code safe for that.

Rather unaccountably, the Perl people chose XSUB.h as the place to provide
the versions of the aTHX/aTHX_ macros that are needed by code that's not
explicitly aware of the MULTIPLICITY API conventions.  Hence, just removing
XSUB.h from plperl.c fails miserably.  But we can work around that by
defining PERL_NO_GET_CONTEXT (which would make the relevant stanza of
XSUB.h a no-op anyway).  As explained in perlguts.pod, that means we need
to add a "dTHX" macro call in every C function that calls a Perl API
function.  In most of them we just add this at the top; but since the
macro fetches the current Perl interpreter pointer, more care is needed
in functions that switch the active interpreter.  Lack of the macro is
easily recognized since it results in bleats about "my_perl" not being
defined.

(A nice side benefit of this is that it significantly reduces the number
of fetches of the current interpreter pointer.  On my machine, plperl.so
gets more than 10% smaller, and there's probably some performance win too.
We could reduce the number of fetches still more by decorating the code
with pTHX_/aTHX_ macros to pass the interpreter pointer around, as
explained by perlguts.pod; but that's a task for another day.)

Formatting note: pgindent seems happy to treat "dTHX;" as a declaration
so long as it's the first thing after the left brace, as we'd already
observed with respect to the similar macro "dSP;".  If you try to put
it later in a set of declarations, pgindent puts ugly extra space
around it.

Having removed XSUB.h from plperl.c, we need only move the support
functions for spi_return_next and util_elog (both of which use PG_TRY)
out of the .xs files and into plperl.c.  This seems sufficient to
avoid the known problems caused by PERL_IMPLICIT_SYS, although we
could move more code if additional issues emerge.

This will need to be back-patched, but first let's see what the
buildfarm makes of it.

Patch by me, with some help from Ashutosh Sharma

Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
2017-07-28 12:25:43 -04:00
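
A stripped-down illustration of the PERL_NO_GET_CONTEXT plus dTHX arrangement described above (not the actual plperl.c code):

    #define PERL_NO_GET_CONTEXT     /* we will fetch the interpreter pointer explicitly */
    #include "EXTERN.h"
    #include "perl.h"

    static void
    bump_refcount(SV *sv)
    {
        dTHX;                       /* declares my_perl for the Perl API call below */

        SvREFCNT_inc(sv);
    }
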
Alvaro Herrera 6c774caf0e Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: c5a8de3653bb1af6b0eb41cc6bf090c5522df52b
2017-07-10 11:53:55 -04:00
Peter Eisentraut 898d24ae2a PL/Python: Fix hint about returning composite type from Python
('foo') is not a Python tuple: it is a string wrapped in parentheses.  A
valid 1-element Python tuple is ('foo',).

Author: Daniele Varrazzo <daniele.varrazzo@gmail.com>
2017-06-30 16:51:14 -04:00
Tom Lane 382ceffdf7 Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.

By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis.  However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent.  That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.

This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:35:54 -04:00
Tom Lane c7b8998ebb Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.

Commit e3860ffa4d wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code.  The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there.  BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs.  So the
net result is that in about half the cases, such comments are placed
one tab stop left of before.  This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.

Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:19:25 -04:00
Tom Lane e3860ffa4d Initial pgindent run with pg_bsd_indent version 2.0.
The new indent version includes numerous fixes thanks to Piotr Stefaniak.
The main changes visible in this commit are:

* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts,
  sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as
  well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed
  with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less
  than the expected column 33.

On the less good side, it now tends to put more whitespace around typedef
names that are not listed in typedefs.list.  This might encourage us to
put more effort into typedef name collection; it's not really a bug in
indent itself.

There are more changes coming after this round, having to do with comment
indentation and alignment of lines appearing within parentheses.  I wanted
to limit the size of the diffs to something that could be reviewed without
one's eyes completely glazing over, so it seemed better to split up the
changes as much as practical.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 14:39:04 -04:00
Dean Rasheed d3c3f2b1e2 Teach PL/pgSQL about partitioned tables.
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update a couple of places in
PL/pgSQL to handle it. Specifically plpgsql_parse_cwordtype() and
build_row_from_class() needed updating in order to make table%ROWTYPE
and table.col%TYPE work for partitioned tables.

Dean Rasheed, reviewed by Amit Langote.

Discussion: https://postgr.es/m/CAEZATCUnNOKN8sLML9jUzxecALWpEXK3a3W7y0PgFR4%2Buhgc%3Dg%40mail.gmail.com
2017-06-14 09:00:01 +01:00
Alvaro Herrera 08aa550694 Update expected file
Missed in ea3e310e71.
2017-05-25 14:41:43 -04:00
Alvaro Herrera ea3e310e71 Fix message case 2017-05-25 13:16:00 -04:00
Bruce Momjian ce55481032 Post-PG 10 beta1 pgperltidy run 2017-05-17 19:01:23 -04:00
Bruce Momjian a6fd7b7a5f Post-PG 10 beta1 pgindent run
perltidy run not included.
2017-05-17 16:31:56 -04:00
Peter Eisentraut 82d24bab75 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 398beeef4921df0956f917becd7b5669d2a8a5c4
2017-05-15 12:19:54 -04:00
Tom Lane 8f0530f580 Improve castNode notation by introducing list-extraction-specific variants.
This extends the castNode() notation introduced by commit 5bcab1114 to
provide, in one step, extraction of a list cell's pointer and coercion to
a concrete node type.  For example, "lfirst_node(Foo, lc)" is the same
as "castNode(Foo, lfirst(lc))".  Almost half of the uses of castNode
that have appeared so far include a list extraction call, so this is
pretty widely useful, and it saves a few more keystrokes compared to the
old way.

As with the previous patch, back-patch the addition of these macros to
pg_list.h, so that the notation will be available when back-patching.

Patch by me, after an idea of Andrew Gierth's.

Discussion: https://postgr.es/m/14197.1491841216@sss.pgh.pa.us
2017-04-10 13:51:53 -04:00
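
Example of the new notation, assuming a List of TargetEntry nodes (query->targetList) is at hand:

    ListCell   *lc;

    foreach(lc, query->targetList)
    {
        /* extract the cell's pointer and check/cast its node type in one step */
        TargetEntry *tle = lfirst_node(TargetEntry, lc);

        /* ... same as: castNode(TargetEntry, lfirst(lc)) ... */
        elog(DEBUG1, "resno %d", tle->resno);
    }
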
Peter Eisentraut afd79873a0 Capitalize names of PLs consistently
Author: Daniel Gustafsson <daniel@yesql.se>
2017-04-05 00:38:25 -04:00
Kevin Grittner 5ebeb579b9 Follow-on cleanup for the transition table patch.
Commit 59702716 added transition table support to PL/pgsql so that
SQL queries in trigger functions could access those transient
tables.  In order to provide the same level of support for PL/perl,
PL/python and PL/tcl, refactor the relevant code into a new
function SPI_register_trigger_data.  Call the new function in the
trigger handler of all four PLs, and document it as a public SPI
function so that authors of out-of-tree PLs can do the same.

Also get rid of a second QueryEnvironment object that was
maintained by PL/pgsql.  That was previously used to deal with
cursors, but the same approach wasn't appropriate for PLs that are
less tangled up with core code.  Instead, have SPI_cursor_open
install the connection's current QueryEnvironment, as already
happens for SPI_execute_plan.

While in the docs, remove the note that transition tables were only
supported in C and PL/pgSQL triggers, and correct some omissions.

Thomas Munro with some work by Kevin Grittner (mostly docs)
2017-04-04 18:36:39 -05:00
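
A sketch of what a PL's trigger handler does with the new function, per the description above (error handling abbreviated):

    TriggerData *trigdata = (TriggerData *) fcinfo->context;

    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");

    /* make any transition tables visible to queries run through SPI */
    if (SPI_register_trigger_data(trigdata) != SPI_OK_TD_REGISTER)
        elog(ERROR, "SPI_register_trigger_data failed");
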
Kevin Grittner 5970271632 Add transition table support to plpgsql.
Kevin Grittner and Thomas Munro
Reviewed by Heikki Linnakangas, David Fetter, and Thomas Munro
with valuable comments and suggestions from many others
2017-03-31 23:30:08 -05:00
Peter Eisentraut 70ec3f1f8f PL/Python: Add cursor and execute methods to plan object
Instead of

    plan = plpy.prepare(...)
    res = plpy.execute(plan, ...)

you can now write

    plan = plpy.prepare(...)
    res = plan.execute(...)

or even

    res = plpy.prepare(...).execute(...)

and similarly for the cursor() method.

This is a more object-oriented style, and it makes the hybrid nature of
the existing execute() function less confusing.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2017-03-27 11:37:22 -04:00
Peter Eisentraut facde2a98f Clean up Perl code according to perlcritic
Fix all perlcritic warnings of severity level 5, except in
src/backend/utils/Gen_dummy_probes.pl, which is automatically generated.

Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2017-03-27 08:18:22 -04:00
Tom Lane 244dd95ce9 Update some obsolete comments.
Fix a few stray references to expression eval functions that don't
exist anymore or don't take the same input representation they used to.
2017-03-26 11:36:46 -04:00
Andres Freund b8d7f053c5 Faster expression evaluation and targetlist projection.
This replaces the old, recursive tree-walk based evaluation, with
non-recursive, opcode dispatch based, expression evaluation.
Projection is now implemented as part of expression evaluation.

This both leads to significant performance improvements, and makes
future just-in-time compilation of expressions easier.

The speed gains primarily come from:
- non-recursive implementation reduces stack usage / overhead
- simple sub-expressions are implemented with a single jump, without
  function calls
- sharing some state between different sub-expressions
- reduced amount of indirect/hard to predict memory accesses by laying
  out operation metadata sequentially; including the avoidance of
  nearly all of the previously used linked lists
- more code has been moved to expression initialization, avoiding
  constant re-checks at evaluation time

Future just-in-time compilation (JIT) has become easier, as
demonstrated by released patches intended to be merged in a later
release, for primarily two reasons: Firstly, due to a stricter split
between expression initialization and evaluation, less code has to be
handled by the JIT. Secondly, due to the non-recursive nature of the
generated "instructions", less performance-critical code-paths can
easily be shared between interpreted and compiled evaluation.

The new framework allows for significant future optimizations. E.g.:
- basic infrastructure to later reduce the per-executor-startup
  overhead of expression evaluation, by caching state in prepared
  statements.  That'd be helpful in OLTPish scenarios where
  initialization overhead is measurable.
- optimizing the generated "code". A number of proposals for potential
  work has already been made.
- optimizing the interpreter. Similarly a number of proposals have
  been made here too.

The move of logic into the expression initialization step leads to some
backward-incompatible changes:
- Function permission checks are now done during expression
  initialization, whereas previously they were done during
  execution. In edge cases this can lead to errors being raised that
  previously wouldn't have been, e.g. a NULL array being coerced to a
  different array type previously didn't perform checks.
- The set of domain constraints to be checked is now evaluated once
  during expression initialization; previously it was re-built
  every time a domain check was evaluated. For normal queries this
  doesn't change much, but e.g. for plpgsql functions, which cache
  ExprStates, the old set could stick around longer.  That behavior
  might still change.

Author: Andres Freund, with significant changes by Tom Lane,
	changes by Heikki Linnakangas
Reviewed-By: Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/20161206034955.bh33paeralxbtluv@alap3.anarazel.de
2017-03-25 14:52:06 -07:00
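
A deliberately simplified schematic of the opcode-dispatch idea described above; this is not the executor's actual step format or interpreter, only the control-flow shape:

    /* Evaluation walks a flat array of "steps" and dispatches on an opcode,
     * instead of recursing through an expression node tree. */
    typedef struct SketchStep
    {
        int     opcode;         /* which operation this step performs */
        Datum   constvalue;     /* per-step data would live here */
    } SketchStep;

    static Datum
    sketch_eval(SketchStep *steps, int nsteps)
    {
        Datum   result = (Datum) 0;
        int     i;

        for (i = 0; i < nsteps; i++)
        {
            switch (steps[i].opcode)
            {
                case 0:         /* "load a constant" */
                    result = steps[i].constvalue;
                    break;
                /* ... further opcodes: fetch a Var, call a function, etc. ... */
            }
        }
        return result;
    }
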
Robert Haas 61c2e1a95f Improve access to parallel query from procedural languages.
In SQL, the ability to use parallel query was previously contingent on
fcache->readonly_func, which is only set for non-volatile functions;
but the volatility of a function has no bearing on whether queries
inside it can use parallelism.  Remove that condition.

SPI_execute and SPI_execute_with_args always run the plan just once,
though not necessarily to completion.  Given the changes in commit
691b8d5928, it's sensible to pass
CURSOR_OPT_PARALLEL_OK here, so do that.  This improves access to
parallelism for any caller that uses these functions to execute
queries.  Such callers include plperl, plpython, pltcl, and plpgsql,
though it's not the case that they all use these functions
exclusively.

In plpgsql, allow parallel query for plain SELECT queries (as
opposed to PERFORM, which already worked) and for plain expressions
(which probably won't go through the executor at all, because they
will likely be simple expressions, but if they do then this helps).

Rafia Sabih and Robert Haas, reviewed by Dilip Kumar and Amit Kapila

Discussion: http://postgr.es/m/CAOGQiiMfJ+4SQwgG=6CVHWoisiU0+7jtXSuiyXBM3y=A=eJzmg@mail.gmail.com
2017-03-24 14:46:33 -04:00
Robert Haas f120b614e0 plpgsql: Don't generate parallel plans for RETURN QUERY.
Commit 7aea8e4f2d allowed a parallel
plan to be generated for a RETURN QUERY or RETURN QUERY EXECUTE
statement in a PL/pgSQL block, but that's a bad idea because plpgsql
asks the executor for 50 rows at a time.  That means that we'll always
be running serially a plan that was intended for parallel execution,
which is not a good idea.  Fix by not requesting a parallel plan from
the outset.

Per discussion, back-patch to 9.6.  There is a slight risk that, due
to optimizer error, somebody could have a case where the parallel plan
executed serially is actually faster than the supposedly-best serial
plan, but the consensus seems to be that that's not sufficient
justification for leaving 9.6 unpatched.

Discussion: http://postgr.es/m/CA+TgmoZ_ZuH+auEeeWnmtorPsgc_SmP+XWbDsJ+cWvWBSjNwDQ@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
2017-03-24 12:30:39 -04:00
Peter Eisentraut f97a028d8e Spelling fixes in code comments
From: Josh Soref <jsoref@gmail.com>
2017-03-14 12:58:39 -04:00
Noah Misch 3a0d473192 Use wrappers of PG_DETOAST_DATUM_PACKED() more.
This makes almost all core code follow the policy introduced in the
previous commit.  Specific decisions:

- Text search support functions with char* and length arguments, such as
  prsstart and lexize, may receive unaligned strings.  I doubt
  maintainers of non-core text search code will notice.

- Use plain VARDATA() on values detoasted or synthesized earlier in the
  same function.  Use VARDATA_ANY() on varlenas sourced outside the
  function, even if they happen to always have four-byte headers.  As an
  exception, retain the universal practice of using VARDATA() on return
  values of SendFunctionCall().

- Retain PG_GETARG_BYTEA_P() in pageinspect.  (Page images are too large
  for a one-byte header, so this misses no optimization.)  Sites that do
  not call get_page_from_raw() typically need the four-byte alignment.

- For now, do not change btree_gist.  Its use of four-byte headers in
  memory is partly entangled with storage of 4-byte headers inside
  GBT_VARKEY, on disk.

- For now, do not change gtrgm_consistent() or gtrgm_distance().  They
  incorporate the varlena header into a cache, and there are multiple
  credible implementation strategies to consider.
2017-03-12 19:35:34 -04:00
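
A minimal example of the preferred pattern (the function and its name are hypothetical):

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(sketch_text_length);

    Datum
    sketch_text_length(PG_FUNCTION_ARGS)
    {
        /* _PP accepts packed (1-byte header) varlenas without forcing a copy */
        text   *t = PG_GETARG_TEXT_PP(0);

        PG_RETURN_INT32(VARSIZE_ANY_EXHDR(t));
    }
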
Tom Lane b58fd4a9ca Add a "subtransaction" command to PL/Tcl.
This allows rolling back the effects of some SPI commands without
having to fail the entire PL/Tcl function.

Victor Wagner, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/20170108205750.2dab04a1@wagner.wagner.home
2017-03-11 14:37:05 -05:00
Tom Lane 08da52859a Bring plpgsql into line with header inclusion policy.
We have a project policy that every .c file should start by including
postgres.h, postgres_fe.h, or c.h as appropriate; and then there is no
need for any .h file to explicitly include any of these.  (The core
reason for this policy is to make it easy to verify that pg_config_os.h
is included before any system headers such as <stdio.h>; without that,
we have portability issues on some platforms due to variation in largefile
options across different modules in the backend.  Also, if .h files were
responsible for choosing which of these key headers to include, .h files
that need to be includable in either frontend or backend compiles would be
in trouble.)

plpgsql was blithely ignoring this policy, so whack it upside the head
until it complies.  I also chose to standardize on including plpgsql's
own .h files after all core-system headers that it pulls in.  That
could've been done either way, but this way seems saner.

Discussion: https://postgr.es/m/CAEepm=2zCoeq3QxVwhS5DFeUh=yU6z81pbWMgfOB8OzyiBwxzw@mail.gmail.com
Discussion: https://postgr.es/m/11634.1488932128@sss.pgh.pa.us
2017-03-08 17:21:08 -05:00
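
The policy in sketch form, using a hypothetical pl_foo.c:

    /* pl_foo.c */
    #include "postgres.h"        /* always first in a backend .c file */

    #include <limits.h>          /* system headers next */

    #include "utils/memutils.h"  /* then other core headers */

    #include "plpgsql.h"         /* the module's own headers last */
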
Tom Lane 0d2b1f305d Invent start_proc parameters for PL/Tcl.
Define GUCs pltcl.start_proc and pltclu.start_proc.  When set to a
nonempty value at the time a new Tcl interpreter is created, the
parameterless pltcl or pltclu function named by the GUC is called to
allow user-controlled initialization to occur within the interpreter.
This is modeled on plv8's start_proc parameter, and also has much in
common with plperl's on_init feature.  It allows users to fully
replace the "modules" feature that was removed in commit 817f2a586.

Since an initializer function could subvert later Tcl code in nearly
arbitrary ways, mark both GUCs as SUSET for now.  It would be nice
to find a way to relax that someday; but the corresponding GUCs in
plperl are also SUSET, and there's not been much complaint.

Discussion: https://postgr.es/m/22067.1488046447@sss.pgh.pa.us
2017-03-07 12:40:44 -05:00
Tom Lane 817f2a5863 Remove PL/Tcl's "module" facility.
PL/Tcl has long had a facility whereby Tcl code could be autoloaded from
a database table named "pltcl_modules".  However, nobody is using it, as
evidenced by the recent discovery that it's never been fixed to work with
standard_conforming_strings turned on.  Moreover, it's rather shaky from
a security standpoint, and the table design is very old and crufty (partly
because it dates from before we had TOAST).  A final problem is that
because the table-population scripts depend on the Tcl client library
Pgtcl, which we removed from the core distribution in 2004, it's
impossible to create a self-contained regression test for the feature.
Rather than try to surmount these problems, let's just remove it.

A follow-on patch will provide a way to execute user-defined
initialization code, similar to features that exist in plperl and plv8.
With that, it will be possible to implement this feature or similar ones
entirely in userspace, which is where it belongs.

Discussion: https://postgr.es/m/22067.1488046447@sss.pgh.pa.us
2017-02-27 11:20:22 -05:00
Tom Lane 9e3755ecb2 Remove useless duplicate inclusions of system header files.
c.h #includes a number of core libc header files, such as <stdio.h>.
There's no point in re-including these after having read postgres.h,
postgres_fe.h, or c.h; so remove code that did so.

While at it, also fix some places that were ignoring our standard pattern
of "include postgres[_fe].h, then system header files, then other Postgres
header files".  While there's not any great magic in doing it that way
rather than system headers last, it's silly to have just a few files
deviating from the general pattern.  (But I didn't attempt to enforce this
globally, only in files I was touching anyway.)

I'd be the first to say that this is mostly compulsive neatnik-ism,
but over time it might save enough compile cycles to be useful.
2017-02-25 16:12:55 -05:00
Peter Eisentraut 38d103763d Make more use of castNode() 2017-02-21 11:59:09 -05:00
Peter Eisentraut 04aad40186 Drop support for Python 2.3
There is no specific reason for this right now, but keeping support for
old Python versions around indefinitely increases the maintenance
burden.  The oldest supported Python version is now Python 2.4, which is
still shipped in RHEL/CentOS 5 by default.

In configure, add a check for the required Python version and give a
friendly error message for an old version, instead of relying on an
obscure build error later on.
2017-02-21 09:49:22 -05:00
Tom Lane 170511b30d Adjust PL/Tcl regression test to dodge a possible bug or zone dependency.
One case in the PL/Tcl tests is observed to fail on RHEL5 with a Turkish
time zone setting.  It's not clear if this is an old Tcl bug or something
odd about the zone data, but in any case that test is meant to see if the
Tcl [clock] command works at all, not what its corner-case behaviors are.
Therefore we have no need to test exactly which week a Sunday midnight is
considered to fall into.  Probe the following Tuesday instead.

Discussion: https://postgr.es/m/797.1487517822@sss.pgh.pa.us
2017-02-19 16:14:52 -05:00
Heikki Linnakangas 181bdb90ba Fix typos in comments.
Backpatch to all supported versions, where applicable, to make backpatching
of future fixes go more smoothly.

Josh Soref

Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com
2017-02-06 11:33:58 +02:00
Tom Lane 7afd56c3c6 Use castNode() in a bunch of statement-list-related code.
When I wrote commit ab1f0c822, I really missed the castNode() macro that
Peter E. had proposed shortly before.  This back-fills the uses I would
have put it to.  It's probably not all that significant, but there are
more assertions here than there were before, and conceivably they will
help catch any bugs associated with those representation changes.

I left behind a number of usages like "(Query *) copyObject(query_var)".
Those could have been converted as well, but Peter has proposed another
notational improvement that would handle copyObject cases automatically,
so I let that be for now.
2017-01-26 22:09:34 -05:00
Tom Lane c0ef456b97 Volatile-ize some plperl variables that must survive into PG_CATCH blocks.
This appears to be necessary to fix a failure seen on buildfarm member
sittella.  It shouldn't be necessary according to the letter of the C
standard, because we don't change the values of these variables within
the PG_TRY blocks; but somehow gcc 4.7.2 is dropping the ball.

Discussion: https://postgr.es/m/17555.1485179975@sss.pgh.pa.us
2017-01-23 09:15:49 -05:00
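
The pattern involved, in sketch form; perl_call_something() is a hypothetical helper, and the point is the volatile qualifier on locals consulted in PG_CATCH after the longjmp:

    SV *volatile retval = NULL;

    PG_TRY();
    {
        retval = perl_call_something();
    }
    PG_CATCH();
    {
        if (retval)
            SvREFCNT_dec(retval);   /* release the Perl value on error */
        PG_RE_THROW();
    }
    PG_END_TRY();
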
Peter Eisentraut f21a563d25 Move some things from builtins.h to new header files
This avoids that builtins.h has to include additional header files.
2017-01-20 20:29:53 -05:00
Andres Freund ea15e18677 Remove obsoleted code relating to targetlist SRF evaluation.
Since 69f4b9c plain expression evaluation (and thus normal projection)
can't return sets of tuples anymore. Thus remove code dealing with
that possibility.

This will require adjustments in external code using
ExecEvalExpr()/ExecProject() - that should neither be hard nor very
common.

Author: Andres Freund and Tom Lane
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
2017-01-19 14:40:41 -08:00
Andres Freund 8b07aee8c5 Adapt python regression tests to 69f4b9c85f.
Hopefully this'll unbreak the buildfarm.
2017-01-18 16:11:19 -08:00
Tom Lane ab1f0c8225 Change representation of statement lists, and add statement location info.
This patch makes several changes that improve the consistency of
representation of lists of statements.  It's always been the case
that the output of parse analysis is a list of Query nodes, whatever
the types of the individual statements in the list.  This patch brings
similar consistency to the outputs of raw parsing and planning steps:

* The output of raw parsing is now always a list of RawStmt nodes;
the statement-type-dependent nodes are one level down from that.

* The output of pg_plan_queries() is now always a list of PlannedStmt
nodes, even for utility statements.  In the case of a utility statement,
"planning" just consists of wrapping a CMD_UTILITY PlannedStmt around
the utility node.  This list representation is now used in Portal and
CachedPlan plan lists, replacing the former convention of intermixing
PlannedStmts with bare utility-statement nodes.

Now, every list of statements has a consistent head-node type depending
on how far along it is in processing.  This allows changing many places
that formerly used generic "Node *" pointers to use a more specific
pointer type, thus reducing the number of IsA() tests and casts needed,
as well as improving code clarity.

Also, the post-parse-analysis representation of DECLARE CURSOR is changed
so that it looks more like EXPLAIN, PREPARE, etc.  That is, the contained
SELECT remains a child of the DeclareCursorStmt rather than getting flipped
around to be the other way.  It's now true for both Query and PlannedStmt
that utilityStmt is non-null if and only if commandType is CMD_UTILITY.
That allows simplifying a lot of places that were testing both fields.
(I think some of those were just defensive programming, but in many places,
it was actually necessary to avoid confusing DECLARE CURSOR with SELECT.)

Because PlannedStmt carries a canSetTag field, we're also able to get rid
of some ad-hoc rules about how to reconstruct canSetTag for a bare utility
statement; specifically, the assumption that a utility is canSetTag if and
only if it's the only one in its list.  While I see no near-term need for
relaxing that restriction, it's nice to get rid of the ad-hocery.

The API of ProcessUtility() is changed so that what it's passed is the
wrapper PlannedStmt not just the bare utility statement.  This will affect
all users of ProcessUtility_hook, but the changes are pretty trivial; see
the affected contrib modules for examples of the minimum change needed.
(Most compilers should give pointer-type-mismatch warnings for uncorrected
code.)
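
A minimal sketch of that minimum change, using a hypothetical helper: pull the
bare utility node out of the wrapper before the usual statement-type checks:

    #include "postgres.h"
    #include "nodes/plannodes.h"
    #include "nodes/parsenodes.h"

    static bool
    is_copy_statement(PlannedStmt *pstmt)
    {
        Node   *parsetree = pstmt->utilityStmt;     /* formerly passed directly */

        return parsetree != NULL && IsA(parsetree, CopyStmt);
    }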

There's also a change in the API of ExplainOneQuery_hook, to pass through
cursorOptions instead of expecting hook functions to know what to pick.
This is needed because of the DECLARE CURSOR changes, but really should
have been done in 9.6; it's unlikely that any extant hook functions
know about using CURSOR_OPT_PARALLEL_OK.

Finally, teach gram.y to save statement boundary locations in RawStmt
nodes, and pass those through to Query and PlannedStmt nodes.  This allows
more intelligent handling of cases where a source query string contains
multiple statements.  This patch doesn't actually do anything with the
information, but a follow-on patch will.  (Passing this information through
cleanly is the true motivation for these changes; while I think this is all
good cleanup, it's unlikely we'd have bothered without this end goal.)

catversion bump because addition of location fields to struct Query
affects stored rules.

This patch is by me, but it owes a good deal to Fabien Coelho who did
a lot of preliminary work on the problem, and also reviewed the patch.

Discussion: https://postgr.es/m/alpine.DEB.2.20.1612200926310.29821@lancre
2017-01-14 16:02:35 -05:00
Tom Lane 5b29e6b688 In PL/Tcl tests, don't choke if optional error fields are missing.
This fixes a portability issue introduced by commit 961bed020: with a
compiler that doesn't support PG_FUNCNAME_MACRO, the "funcname" field of
errorCode won't be provided, leading to a failure of the unset command.
I added -nocomplain to the unset commands for filename and lineno too, just
in case, though I know of no platform that wouldn't populate those fields.
(BTW, -nocomplain is new in Tcl 8.4, but fortunately we dropped support
for pre-8.4 Tcl some time ago.)

Per buildfarm member pademelon.
2017-01-13 16:59:52 -05:00
Tom Lane 8c5722948e Fix error handling in pltcl_returnnext.
We can't throw elog(ERROR) out of a Tcl command procedure; we have
to catch the error and return TCL_ERROR to the Tcl interpreter.
pltcl_returnnext failed to meet this requirement, so that errors
detected by pltcl_build_tuple_result or other functions called here
led to longjmp'ing out of the Tcl interpreter and thereby leaving it
in a bad state.  Use the existing subtransaction support to prevent
that.  Oversight in commit 26abb50c4, found more or less accidentally
by the buildfarm thanks to the tests added in 961bed020.

Report: https://postgr.es/m/30647.1483989734@sss.pgh.pa.us
2017-01-09 17:47:10 -05:00
Tom Lane 961bed0208 Expand the regression tests for PL/Tcl.
This raises the test coverage (by line count) in pltcl.c from about 70%
to 86%.

Karl Lehenbauer and Jim Nasby

Discussion: https://postgr.es/m/92a1670d-21b6-8f03-9c13-e4fb2207ab7b@BlueTreble.com
2017-01-09 10:10:22 -05:00
Tom Lane de5fed0d0c Merge two copies of tuple-building code in pltcl.c.
Make pltcl_trigger_handler() construct modified tuples using
pltcl_build_tuple_result(), rather than its own copy of essentially
the same logic.  This results in slightly different message wording for
the error cases, and in one case a different SQLSTATE, but it seems
unlikely that any existing applications are depending on any of those
details.

While at it, fix a typo in commit 26abb50c4: pltcl_build_tuple_result was
applying encoding conversion in the wrong direction.  That would be a
back-patchable bug fix, except the code hasn't shipped yet.

Jim Nasby, reviewed by me

Discussion: https://postgr.es/m/d2c6425a-d9e0-f034-f774-4a872c234d89@BlueTreble.com
2017-01-06 16:22:08 -05:00
Peter Eisentraut 933b46644c Use 'use strict' in all Perl programs 2017-01-05 12:34:48 -05:00
Bruce Momjian 1d25779284 Update copyright via script for 2017 2017-01-03 13:48:53 -05:00
Tom Lane 55caaaeba8 Improve handling of array elements as getdiag_targets and cursor_variables.
There's no good reason why plpgsql's GET DIAGNOSTICS statement can't
support an array element as target variable, since the execution code
already uses the generic exec_assign_value() function to assign to it.
Hence, refactor the grammar to allow that, by making getdiag_target
depend on the assign_var production.

Ideally we'd also let a cursor_variable expand to an element of a
refcursor[] array, but that's substantially harder since those statements
also have to handle bound-cursor-variable cases.  For now, just make sure
the reported error is sensible, ie "cursor variable must be a simple
variable" not "variable must be of type cursor or refcursor".  The latter
was quite confusing from the user's viewpoint, since what he wrote
satisfies the claimed restriction.

Per bug #14463 from Zhou Digoal.  Given the lack of previous complaints,
I see no need for a back-patch.

Discussion: https://postgr.es/m/20161213152548.14897.81245@wrigleys.postgresql.org
2016-12-13 16:33:03 -05:00
Tom Lane 9cda81f005 Be more careful about Python refcounts while creating exception objects.
PLy_generate_spi_exceptions neglected to do Py_INCREF on the new exception
objects, evidently supposing that PyModule_AddObject would do that --- but
it doesn't.  This left us in a situation where a Python garbage collection
cycle could result in deletion of exception object(s), causing server
crashes or wrong answers if the exception objects are used later in the
session.

In addition, PLy_generate_spi_exceptions didn't bother to test for
a null result from PyErr_NewException, which at best is inconsistent
with the code in PLy_add_exceptions.  And PLy_add_exceptions, while it
did do Py_INCREF on the exceptions it makes, waited to do that till
after some PyModule_AddObject calls, creating a similar risk for
failure if garbage collection happened within those calls.

To fix, refactor to have just one piece of code that creates an
exception object and adds it to the spiexceptions module, bumping the
refcount first.

Also, let's add an additional refcount to represent the pointer we're
going to store in a C global variable or hash table.  This should only
matter if the user does something weird like delete the spiexceptions
Python module, but lack of paranoia has caused us enough problems in
PL/Python already.

The fact that PyModule_AddObject doesn't do a Py_INCREF of its own
explains the need for the Py_INCREF added in commit 4c966d920, so we
can improve the comment about that; also, this means we really want
to do that before not after the PyModule_AddObject call.
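
A minimal sketch of that ordering, with hypothetical names; PyModule_AddObject
steals one reference on success but never adds one of its own:

    #include <Python.h>

    static PyObject *saved_exc;     /* long-lived pointer kept at the C level */

    static int
    add_exception(PyObject *mod, const char *dotted_name, const char *name)
    {
        PyObject   *exc = PyErr_NewException((char *) dotted_name, NULL, NULL);

        if (exc == NULL)
            return -1;
        Py_INCREF(exc);             /* extra refcount for our own pointer */
        saved_exc = exc;
        return PyModule_AddObject(mod, name, exc);  /* steals one reference */
    }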

The missing Py_INCREF in PLy_generate_spi_exceptions was reported and
diagnosed by Rafa de la Torre; the other fixes by me.  Back-patch
to all supported branches.

Discussion: https://postgr.es/m/CA+Fz15kR1OXZv43mDrJb3XY+1MuQYWhx5kx3ea6BRKQp6ezGkg@mail.gmail.com
2016-12-09 15:27:23 -05:00
Tom Lane 4ecd197437 Check that result tupdesc has exactly 1 column in return_next scalar case.
This should always be true, but since we're relying on a tuple descriptor
passed from outside pltcl itself, let's check.  Per a gripe from Coverity.
2016-11-15 16:48:19 -05:00
Tom Lane 1833f1a1c3 Simplify code by getting rid of SPI_push, SPI_pop, SPI_restore_connection.
The idea behind SPI_push was to allow transitioning back into an
"unconnected" state when a SPI-using procedure calls unrelated code that
might or might not invoke SPI.  That sounds good, but in practice the only
thing it does for us is to catch cases where a called SPI-using function
forgets to call SPI_connect --- which is a highly improbable failure mode,
since it would be exposed immediately by direct testing of said function.
As against that, we've had multiple bugs induced by forgetting to call
SPI_push/SPI_pop around code that might invoke SPI-using functions; these
are much harder to catch and indeed have gone undetected for years in some
cases.  And we've had to band-aid around some problems of this ilk by
introducing conditional push/pop pairs in some places, which really kind
of defeats the purpose altogether; if we can't draw bright lines between
connected and unconnected code, what's the point?

Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the
underlying state variable _SPI_curid.  It turns out SPI_restore_connection
can go away too, which is a nice side benefit since it was never more than
a kluge.  Provide no-op macros for the deleted functions so as to avoid an
API break for external modules.

A side effect of this removal is that SPI_palloc and allied functions no
longer permit being called when unconnected; they'll throw an error
instead.  The apparent usefulness of the previous behavior was a mirage
as well, because it was depended on by only a few places (which I fixed in
preceding commits), and it posed a risk of allocations being unexpectedly
long-lived if someone forgot a SPI_push call.

Discussion: <20808.1478481403@sss.pgh.pa.us>
2016-11-08 17:39:57 -05:00
Tom Lane 9257f07872 Replace uses of SPI_modifytuple that intend to allocate in current context.
Invent a new function heap_modify_tuple_by_cols() that is functionally
equivalent to SPI_modifytuple except that it always allocates its result
by simple palloc.  I chose however to make the API details a bit more
like heap_modify_tuple: pass a tupdesc rather than a Relation, and use
bool convention for the isnull array.

Use this function in place of SPI_modifytuple at all call sites where the
intended behavior is to allocate in current context.  (There actually are
only two call sites left that depend on the old behavior, which makes me
wonder if we should just drop this function rather than keep it.)

This new function is easier to use than heap_modify_tuple() for purposes
of replacing a single column (or, really, any fixed number of columns).
There are a number of places where it would simplify the code to change
over, but I resisted that temptation for the moment ... everywhere except
in plpgsql's exec_assign_value(); changing that might offer some small
performance benefit, so I did it.
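
A minimal sketch of the single-column case (hypothetical helper); the result is
palloc'd in the caller's current memory context:

    #include "postgres.h"
    #include "access/htup_details.h"

    static HeapTuple
    replace_one_column(HeapTuple tup, TupleDesc tupdesc, int attnum, Datum newval)
    {
        Datum       replValue = newval;
        bool        replIsnull = false;

        return heap_modify_tuple_by_cols(tup, tupdesc, 1,
                                         &attnum, &replValue, &replIsnull);
    }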

This is on the way to removing SPI_push/SPI_pop, but it seems like
good code cleanup in its own right.

Discussion: <9633.1478552022@sss.pgh.pa.us>
2016-11-08 15:36:44 -05:00
Tom Lane 6d30fb1f75 Make SPI_fnumber() reject dropped columns.
There's basically no scenario where it's sensible for this to match
dropped columns, so put a test for dropped-ness into SPI_fnumber()
itself, and excise the test from the small number of callers that
were paying attention to the case.  (Most weren't :-(.)

In passing, normalize tests at call sites: always reject attnum <= 0
if we're disallowing system columns.  Previously there was a mixture
of "< 0" and "<= 0" tests.  This makes no practical difference since
SPI_fnumber() never returns 0, but I'm feeling pedantic today.

Also, in the places that are actually live user-facing code and not
legacy cruft, distinguish "column not found" from "can't handle
system column".

Per discussion with Jim Nasby; this supersedes his original patch
that just changed the behavior at one call site.

Discussion: <b2de8258-c4c0-1cb8-7b97-e8538e5c975c@BlueTreble.com>
2016-11-08 13:11:26 -05:00
Tom Lane de4026c673 Use heap_modify_tuple not SPI_modifytuple in pl/python triggers.
The code here would need some change anyway given planned change in
SPI_modifytuple semantics, since this executes after we've exited the
SPI environment.  But really it's better to just use heap_modify_tuple.

While at it, normalize use of SPI_fnumber: make error messages distinguish
no-such-column from can't-set-system-column, and remove test for deleted
column which is going to migrate into SPI_fnumber.  The lack of a check
for system column names is actually a pre-existing bug here, and might
even qualify as a security bug except that we don't have any trusted
version of plpython.
2016-11-08 12:00:24 -05:00
Tom Lane 0d4446083d Use heap_modify_tuple not SPI_modifytuple in pl/perl triggers.
The code here would need some change anyway given planned change in
SPI_modifytuple semantics, since this executes after we've exited the
SPI environment.  But really it's better to just use heap_modify_tuple.
The code's actually shorter this way, and this avoids depending on some
rather indirect reasoning about why the temporary arrays can't be overrun.
(I think the old code is safe, as long as Perl hashes can't contain
duplicate keys; but with this way we don't need that assumption, only
the assumption that SPI_fnumber doesn't return an out-of-range attnum.)

While at it, normalize use of SPI_fnumber: make error messages distinguish
no-such-column from can't-set-system-column, and remove test for deleted
column which is going to migrate into SPI_fnumber.
2016-11-08 11:35:13 -05:00
Tom Lane 7f1bcfb93d Sync pltcl_build_tuple_result's error handling with pltcl_trigger_handler.
Meant to do this in 26abb50c4, but forgot.
2016-11-06 19:22:12 -05:00
Tom Lane 26abb50c49 Support PL/Tcl functions that return composite types and/or sets.
Jim Nasby, rather heavily editorialized by me

Patch: <f2134651-14b3-efeb-f274-c69f3c084031@BlueTreble.com>
2016-11-06 17:56:05 -05:00
Tom Lane 2178cbf40d Modernize result-tuple construction in pltcl_trigger_handler().
Use Tcl_ListObjGetElements instead of Tcl_SplitList.  Aside from being
possibly more efficient in its own right, this means we are no longer
responsible for freeing a malloc'd result array, so we can get rid of
a PG_TRY/PG_CATCH block.
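
A minimal sketch of the call (hypothetical helper); the element array belongs
to the list object, so nothing has to be freed afterwards:

    #include <tcl.h>

    static int
    list_length_or_error(Tcl_Interp *interp, Tcl_Obj *listobj)
    {
        int         nelems;
        Tcl_Obj   **elems;

        if (Tcl_ListObjGetElements(interp, listobj, &nelems, &elems) != TCL_OK)
            return -1;          /* error message already left in interp */
        return nelems;
    }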

Use heap_form_tuple instead of SPI_modifytuple.  We don't need the
extra generality of the latter, since we're always replacing all
columns.  Nor do we need its memory-context-munging, since at this
point we're already out of the SPI environment.

Per comparison of this code to tuple-building code submitted by Jim Nasby.
I've abandoned the thought of merging the two cases into a single routine,
but we may as well make the older code simpler and faster where we can.
2016-11-06 16:09:57 -05:00
Tom Lane fd2664dcb7 Rationalize and document pltcl's handling of magic ".tupno" array element.
For a very long time, pltcl's spi_exec and spi_execp commands have had
a behavior of storing the current row number as an element of output
arrays, but this was never documented.  Fix that.

For an equally long time, pltcl_trigger_handler had a behavior of silently
ignoring ".tupno" as an output column name, evidently so that the result
of spi_exec could be used directly as a trigger result tuple.  Not sure
how useful that really is, but in any case it's bad that it would break
attempts to use ".tupno" as an actual column name.  We can fix it by not
checking for ".tupno" until after we check for a column name match.  This
comports with the effective behavior of spi_exec[p] that ".tupno" is only
magic when you don't have an actual column named that.

In passing, wordsmith the description of returning modified tuples from
a pltcl trigger.

Noted while working on Jim Nasby's patch to support composite results
from pltcl.  The inability to return trigger tuples using ".tupno" as
a column name is a bug, so back-patch to all supported branches.
2016-11-06 14:43:13 -05:00
Tom Lane fc8b81a291 Need to do SPI_push/SPI_pop around expression evaluation in plpgsql.
We must do this in case the expression evaluation results in calling
another plpgsql function (or, really, anything using SPI).  I missed
the need for this when I converted exec_cast_value() from doing a
simple InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b.
There is a SPI_push_conditional in InputFunctionCall(), so that there
was no bug before that.

Per bug #14414 from Marcos Castedo.  Add a regression test based on his
example, which was that a plpgsql function in a domain check constraint
didn't work when assigning to a domain-type variable within plpgsql.

Report: <20161106010947.1387.66380@wrigleys.postgresql.org>
2016-11-06 12:09:36 -05:00
Tom Lane 1b00dd0ea0 Improve minor error-handling details in pltcl.
Don't ask Tcl_GetIndexFromObj to store an error message in the interpreter
in cases where the next argument isn't necessarily one of the options
we're asking it to check for.  At best that is a waste of time, and at
worst it might cause an inappropriate error result to get left behind.

Be sure to check for valid syntax (ie, no command arguments) in
pltcl_SPI_lastoid.

Extracted from a larger and otherwise-unrelated patch.

Jim Nasby

Patch: <f2134651-14b3-efeb-f274-c69f3c084031@BlueTreble.com>
2016-11-05 17:32:29 -04:00
Peter Eisentraut eaed88ce12 Add function name to PyArg_ParseTuple()
This causes the supplied function name to appear in any error message,
making the error message friendlier and relieving us from having to
provide our own in some cases.
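
A minimal sketch with a hypothetical method: the name after the colon in the
format string is reported by Python's own parse error:

    #include <Python.h>

    static PyObject *
    plpy_example(PyObject *self, PyObject *args)
    {
        char       *message;

        if (!PyArg_ParseTuple(args, "s:example", &message))
            return NULL;        /* TypeError text names "example" */
        Py_RETURN_NONE;
    }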
2016-10-27 15:41:29 -04:00
Peter Eisentraut 84d457edaf Format PL/Python module contents test vertically
It makes it readable again and makes merges more manageable.
2016-10-27 15:41:29 -04:00
Heikki Linnakangas 8eb6337f9f Remove platform-dependent PL/python test case.
Turns out that the output format of Python Decimal isn't totally platform-
independent either. There are other tests for multi-dimensional arrays,
so rather than try to fix this test case, just remove it.

Per buildfarm member prairiedog.
2016-10-26 17:09:18 +03:00
Heikki Linnakangas cfd9c87a54 Only treat Python Lists as array dimensions.
Instead of treating all python sequence types as array dimensions, except
for tuples and various kinds of strings, only treat Python lists as
dimensions. The PyBytes_Check() function used previously is only available
on Python 2.6 and newer, and it was a bit fiddly anyway. The list of
exceptions would require adjustment if Python got a new kind of a sequence
similar to bytes/unicodes/strings, so only checking for Lists seems more
future-proof. The documentation only mentioned using Lists, so this is
closer to what was documented, anyway.

This should fix the buildfarm failures on systems building with Python 2.5,
although I don't have Python 2.5 installed myself to test with.
2016-10-26 14:44:55 +03:00
Heikki Linnakangas 73c8e8506c Avoid using platform-dependent floats in test case.
The number of decimals printed for floats varied in this test case, as
noted by several buildfarm members. There's nothing special about floats
and arrays in the code being tested, so replace the floats with numerics to
make the output platform-independent.
2016-10-26 14:17:07 +03:00
Heikki Linnakangas e131ba4fe5 Fix regression test to also work with Python 2.
Per buildfarm.
2016-10-26 11:18:04 +03:00
Heikki Linnakangas 510e1b8ecf Give a hint, when [] is incorrectly used for a composite type in array.
That used to be accepted, so let's try to give a hint to users on why
their PL/python functions no longer work.

Reviewed by Pavel Stehule.

Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
2016-10-26 10:56:56 +03:00
Heikki Linnakangas 94aceed317 Support multi-dimensional arrays in PL/python.
Multi-dimensional arrays can now be used as arguments to a PL/python function
(used to throw an error), and they can be returned as nested Python lists.

This makes a backwards-incompatible change to the handling of composite
types in arrays. Previously, you could return an array of composite types
as "[[col1, col2], [col1, col2]]", but now that is interpreted as a two-
dimensional array. Composite types in arrays must now be returned as
Python tuples, not lists, to resolve the ambiguity. I.e. "[(col1, col2),
(col1, col2)]".

To avoid breaking backwards-compatibility, when not necessary, () is still
accepted for arrays at the top-level, but it is always treated as a
single-dimensional array. Likewise, [] is still accepted for composite types,
when they are not in an array. Update the documentation to recommend using []
for arrays, and () for composite types, with a mention that those other things
are also accepted in some contexts.

This needs to be mentioned in the release notes.

Alexey Grishchenko, Dave Cramer and me. Reviewed by Pavel Stehule.

Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
2016-10-26 10:56:30 +03:00
Andres Freund ccbb852cd6 Fix further hash table order dependent tests.
Similar to 0137caf273, this makes contrib and pl tests less dependent on
hash-table order.  After this commit, at least some order affecting
changes to execGrouping.c don't result in regression test changes
anymore.
2016-10-12 18:31:45 -07:00
Tom Lane 7107d58ec5 Fix misplacement of submake-generated-headers prerequisites.
The sequence "configure; cd src/pl/plpython; make -j" failed due to
trying to compile plpython's .o files before the generated headers
finished building.  (This is an important real-world case, since it's
the typical second step when building both plpython2 and plpython3.)
This happens because the submake-generated-headers target is not
placed in a way to make it a prerequisite to compiling the .o files.
Fix that.

Checking other uses of submake-generated-headers, I noted that the one
attached to pg_regress was similarly misplaced; but it's actually not
needed at all for pg_regress.o, rather regress.o, so move it to be a
prerequisite of that.

Back-patch to 9.6 where submake-generated-headers was introduced
(by commit 548af97fc).  It's not immediately clear to me why the
previous coding didn't have the same issue; but since we've not
had field reports of plpython make failing, leave it alone in the
older branches.

Pavel Raiskup and Tom Lane

Discussion: <1925924.izSMJEZO3x@unused-4-107.brq.redhat.com>
2016-10-01 13:35:13 -04:00
Tom Lane a4c35ea1c2 Improve parser's and planner's handling of set-returning functions.
Teach the parser to reject misplaced set-returning functions during parse
analysis using p_expr_kind, in much the same way as we do for aggregates
and window functions (cf commit eaccfded9).  While this isn't complete
(it misses nesting-based restrictions), it's much better than the previous
error reporting for such cases, and it allows elimination of assorted
ad-hoc expression_returns_set() error checks.  We could add nesting checks
later if it seems important to catch all cases at parse time.

There is one case the parser will now throw error for although previous
versions allowed it, which is SRFs in the tlist of an UPDATE.  That never
behaved sensibly (since it's ill-defined which generated row should be
used to perform the update) and it's hard to see why it should not be
treated as an error.  It's a release-note-worthy change though.

Also, add a new Query field hasTargetSRFs reporting whether there are
any SRFs in the targetlist (including GROUP BY/ORDER BY expressions).
The parser can now set that basically for free during parse analysis,
and we can use it in a number of places to avoid expression_returns_set
searches.  (There will be more such checks soon.)  In some places, this
allows decontorting the logic since it's no longer expensive to check for
SRFs in the tlist --- so I made the checks parallel to the handling of
hasAggs/hasWindowFuncs wherever it seemed appropriate.

catversion bump because adding a Query field changes stored rules.

Andres Freund and Tom Lane

Discussion: <24639.1473782855@sss.pgh.pa.us>
2016-09-13 13:54:24 -04:00
Peter Eisentraut e0013deb59 Make better use of existing enums in plpgsql
plpgsql.h defines a number of enums, but most of the code passes them
around as ints.  Update structs and function prototypes to take enum
types instead.  This clarifies the struct definitions in plpgsql.h in
particular.

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
2016-09-09 12:00:00 -04:00
Tom Lane 6f7c0ea32f Improve memory management for PL/Perl functions.
Unlike PL/Tcl, PL/Perl at least made an attempt to clean up after itself
when a function gets redefined.  But it was still using TopMemoryContext
for the fn_mcxt of argument/result I/O functions, resulting in the
potential for memory leaks depending on what those functions did, and the
retail alloc/free logic was pretty bulky as well.  Fix things to use a
per-function memory context like the other PLs now do.  Tweak a couple of
places where things were being done in a not-very-safe order (on the
principle that a memory leak is better than leaving global state
inconsistent after an error).  Also make some minor cosmetic adjustments,
mostly in field names, to make the code look similar to the way PL/Tcl does
now wherever it's essentially the same logic.

Michael Paquier and Tom Lane

Discussion: <CAB7nPqSOyAsHC6jL24J1B+oK3p=yyNoFU0Vs_B6fd2kdd5g5WQ@mail.gmail.com>
2016-08-31 19:54:58 -04:00
Tom Lane d062245b5b Improve memory management for PL/Tcl functions.
Formerly, the memory used to represent a PL/Tcl function was allocated with
malloc() or in TopMemoryContext, and we'd leak it all if the function got
redefined during the session.  Instead, create a per-function context and
keep everything in or under that context.  Add a reference-counting
mechanism (like the one plpgsql has long had) so that we can safely clean
up an old function definition, either immediately if it's not being
executed or at the end of the outermost execution.

Currently, we only detect that a cached function is obsolete when we next
attempt to call that function.  So this covers the updated-definition case
but leaves cruft around after DROP FUNCTION.  It's not clear whether it's
worth installing a syscache invalidation callback to watch for drops;
none of the other PLs do, so for now we won't do it here either.

Michael Paquier and Tom Lane

Discussion: <CAB7nPqSOyAsHC6jL24J1B+oK3p=yyNoFU0Vs_B6fd2kdd5g5WQ@mail.gmail.com>
2016-08-31 17:27:09 -04:00
Tom Lane ea268cdc9a Add macros to make AllocSetContextCreate() calls simpler and safer.
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
had typos in the context-sizing parameters.  While none of these led to
especially significant problems, they did create minor inefficiencies,
and it's now clear that expecting people to copy-and-paste those calls
accurately is not a great idea.  Let's reduce the risk of future errors
by introducing single macros that encapsulate the common use-cases.
Three such macros are enough to cover all but two special-purpose contexts;
those two calls can be left as-is, I think.
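
A minimal sketch of a call using one of the new macros (context name is
hypothetical):

    #include "postgres.h"
    #include "utils/memutils.h"

    static MemoryContext
    make_work_context(MemoryContext parent)
    {
        return AllocSetContextCreate(parent,
                                     "example work context",
                                     ALLOCSET_DEFAULT_SIZES);
    }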

While this patch doesn't in itself improve matters for third-party
extensions, it doesn't break anything for them either, and they can
gradually adopt the simplified notation over time.

In passing, change TopMemoryContext to use the default allocation
parameters.  Formerly it could only be extended 8K at a time.  That was
probably reasonable when this code was written; but nowadays we create
many more contexts than we did then, so that it's not unusual to have a
couple hundred K in TopMemoryContext, even without considering various
dubious code that sticks other things there.  There seems no good reason
not to let it use growing blocks like most other contexts.

Back-patch to 9.6, mostly because that's still close enough to HEAD that
it's easy to do so, and keeping the branches in sync can be expected to
avoid some future back-patching pain.  The bugs fixed by these changes
don't seem to be significant enough to justify fixing them further back.

Discussion: <21072.1472321324@sss.pgh.pa.us>
2016-08-27 17:50:38 -04:00
Tom Lane 5697522d84 In plpgsql, don't try to convert int2vector or oidvector to expanded array.
These types are storage-compatible with real arrays, but they don't support
toasting, so of course they can't support expansion either.

Per bug #14289 from Michael Overmeyer.  Back-patch to 9.5 where expanded
arrays were introduced.

Report: <20160818174414.1529.37913@wrigleys.postgresql.org>
2016-08-18 14:49:08 -04:00
Peter Eisentraut 9f31e45a6d Improve formatting of comments in plpgsql.h
This file had some unusual comment layout.  Most of the comments
introducing structs ended up to the right of the screen and following
the start of the struct.  Some comments for struct members ended up
after the member definition.

Fix that by moving comments consistently before what they are
describing.  Also add struct tags where missing so that it is easier to
tell what the struct is.
2016-08-18 12:00:00 -04:00
Tom Lane bfaaacc805 Improve plpgsql's memory management to fix some function-lifespan leaks.
In some cases, exiting out of a plpgsql statement due to an error, then
catching the error in a surrounding exception block, led to leakage of
temporary data the statement was working with, because we kept all such
data in the function-lifespan SPI Proc context.  Iterating such behavior
many times within one function call thus led to noticeable memory bloat.

To fix, create an additional memory context meant to have statement
lifespan.  Since many plpgsql statements, particularly the simpler/more
common ones, don't need this, create it only on demand.  Reset this context
at the end of any statement that uses it, and arrange for exception cleanup
to reset it too, thereby fixing the memory-leak issue.  Allow a stack of
such contexts to exist to handle cases where a compound statement needs
statement-lifespan data that persists across calls of inner statements.

While at it, clean up code and improve comments referring to the existing
short-term memory context, which by plpgsql convention is the per-tuple
context of the eval_econtext ExprContext.  We now uniformly refer to that
as the eval_mcontext, whereas the new statement-lifespan memory contexts
are called stmt_mcontext.

This change adds some context-creation overhead, but on the other hand
it allows removal of some retail pfree's in favor of context resets.
On balance it seems to be about a wash performance-wise.

In principle this is a bug fix, but it seems too invasive for a back-patch,
and the infrequency of complaints weighs against taking the risk in the
back branches.  So we'll fix it only in HEAD, at least for now.

Tom Lane, reviewed by Pavel Stehule

Discussion: <17863.1469142152@sss.pgh.pa.us>
2016-08-17 14:51:10 -04:00
Tom Lane 0bb51aa967 Improve parsetree representation of special functions such as CURRENT_DATE.
We implement a dozen or so parameterless functions that the SQL standard
defines special syntax for.  Up to now, that was done by converting them
into more or less ad-hoc constructs such as "'now'::text::date".  That's
messy for multiple reasons: it exposes what should be implementation
details to users, and performance is worse than it needs to be in several
cases.  To improve matters, invent a new expression node type
SQLValueFunction that can represent any of these parameterless functions.
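
A minimal sketch of the node the parser now builds for CURRENT_DATE (helper
name is hypothetical; field values as in primnodes.h):

    #include "postgres.h"
    #include "catalog/pg_type.h"
    #include "nodes/primnodes.h"

    static Node *
    make_current_date(int location)
    {
        SQLValueFunction *svf = makeNode(SQLValueFunction);

        svf->op = SVFOP_CURRENT_DATE;
        svf->type = DATEOID;
        svf->typmod = -1;
        svf->location = location;
        return (Node *) svf;
    }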

Bump catversion because this changes stored parsetrees for rules.

Discussion: <30058.1463091294@sss.pgh.pa.us>
2016-08-16 20:33:01 -04:00
Tom Lane b5bce6c1ec Final pgindent + perltidy run for 9.6. 2016-08-15 13:42:51 -04:00
Peter Eisentraut 34927b2920 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: cda21c1d7b160b303dc21dfe9d4169f2c8064c60
2016-08-08 11:08:00 -04:00
Tom Lane 95810ed8ee Make pltcl regression tests safe for Danish locale.
Another peculiarity of Danish locale is that it has an unusual idea
of how to sort upper vs. lower case.  One of the pltcl test cases has
an issue with that.  Now that COLLATE works in all supported branches,
we can just change the test to be locale-independent, and get rid of
the variant expected file that used to support non-C locales.
2016-07-21 14:24:07 -04:00
Peter Eisentraut 7d67606569 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3d71988dffd3c0798a8864c55ca4b7833b48abb1
2016-07-18 12:07:49 -04:00
Tom Lane baebab3ace Allow IMPORT FOREIGN SCHEMA within pl/pgsql.
Since IMPORT FOREIGN SCHEMA has an INTO clause, pl/pgsql needs to be
aware of that and avoid capturing the INTO as an INTO-variables clause.
This isn't hard, though it's annoying to have to make IMPORT a plpgsql
keyword just for this.  (Fortunately, we have the infrastructure now
to make it an unreserved keyword, so at least this shouldn't break any
existing pl/pgsql code.)

Per report from Merlin Moncure.  Back-patch to 9.5 where IMPORT FOREIGN
SCHEMA was introduced.

Report: <CAHyXU0wpHf2bbtKGL1gtUEFATCY86r=VKxfcACVcTMQ70mCyig@mail.gmail.com>
2016-07-12 18:07:03 -04:00
Peter Eisentraut 3a4a33ad49 PL/Python: Report argument parsing errors using exceptions
Instead of calling PLy_elog() for reporting Python argument parsing
errors, generate appropriate exceptions.  This matches the existing plpy
functions and is more consistent with the behavior of the Python
argument parsing routines.
2016-07-02 22:53:14 -04:00
Tom Lane 548af97fce Provide and use a makefile target to build all generated headers.
As of 9.6, pg_regress doesn't build unless storage/lwlocknames.h has been
created; but there was nothing forcing that to happen if you just went into
src/test/regress/ and built there.  We previously had a similar complaint
about plpython.

To fix in a way that won't break next time we invent a generated header,
make src/backend/Makefile expose a phony target for updating all the
include files it builds, and invoke that before building pg_regress or
plpython.  In principle, maybe we ought to invoke that everywhere; but
it would add a lot of usually-useless make cycles, so let's just do it
in the places where people have complained.

I made a couple of cosmetic adjustments in src/backend/Makefile as well,
to deal with the generated headers in consistent orders.

Michael Paquier and Tom Lane

Report: <31398.1467036827@sss.pgh.pa.us>
Report: <20150916200959.GB32090@msg.df7cb.de>
2016-07-01 15:09:02 -04:00
Tom Lane 1fe1204e87 Add missing check for malloc failure in plpgsql_extra_checks_check_hook().
Per report from Andreas Seltenreich.  Back-patch to affected versions.

Report: <874m8nn0hv.fsf@elite.ansel.ydns.eu>
2016-06-20 15:36:54 -04:00
Peter Eisentraut 47981a4665 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 0c374f8d25ed31833a10d24252bc928d41438838
2016-06-20 09:48:08 -04:00
Peter Eisentraut f0688d6e6c PL/Python: Clean up extended error reporting docs and tests
Format the example and test code more to Python style standards.
Improve whitespace.  Improve documentation formatting.
2016-06-15 10:34:11 -04:00
Peter Eisentraut 020140d84d PL/Python: Rename new keyword arguments of plpy.error() etc.
Rename schema -> schema_name etc. to remain consistent with C API and
PL/pgSQL.
2016-06-11 19:27:49 -04:00
Robert Haas 4bc424b968 pgindent run for 9.6 2016-06-09 18:02:36 -04:00
Peter Eisentraut 8359077124 PL/Python: Move ereport wrapper test cases to separate file
In commit 5c3c3cd0a3, the new tests were
apparently just dumped into the first convenient file.  Move them to a
separate file dedicated to testing that functionality and leave the
plpython_test test to test basic functionality, as it did before.
2016-06-07 09:39:11 -04:00
Peter Eisentraut 48aaba4acf Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 17bf3e8564abf600274789fcc90e72532d5e7c05
2016-05-09 10:04:41 -04:00
Tom Lane 1d2f9de38d Fix freshly-introduced PL/Python portability bug.
It turns out that those PyErr_Clear() calls I removed from plpy_elog.c
in 7e3bb08038 et al were not quite as random as they appeared: they
mask a Python 2.3.x bug.  (Specifically, it turns out that PyType_Ready()
can fail if the error indicator is set on entry, and PLy_traceback's fetch
of frame.f_code may be the first operation in a session that requires the
"frame" type to be readied.  Ick.)  Put back the clear call, but in a more
centralized place closer to what it's protecting, and this time with a
comment warning what it's really for.

Per buildfarm member prairiedog.  Although prairiedog was only failing
on HEAD, it seems clearly possible for this to occur in older branches
as well, so back-patch to 9.2 the same as the previous patch.
2016-04-11 18:17:20 -04:00
Peter Eisentraut d8ed83cd7f Fix whitespace 2016-04-11 14:44:51 -04:00
Tom Lane 81ba9348d8 Fix missing "volatile" in PLy_output().
Commit 5c3c3cd0a3 plastered "volatile" on a bunch of variables
in PLy_output(), but removed the one that actually mattered, ie the
one on "oldcontext".  This allows some versions of clang to generate
code in which "oldcontext" has been trashed when control reaches the
PG_CATCH block.  Per buildfarm member tick.
2016-04-11 11:49:54 -04:00
Tom Lane f73b2bbbdc Fix poorly thought-through code from commit 5c3c3cd0a3.
It's not entirely clear to me whether PyString_AsString can return
null (looks like the answer might vary between Python 2 and 3).
But in any case, this code's attempt to cope with the possibility
was quite broken, because pstrdup() neither allows a null argument
nor ever returns a null.

Moreover, the code below this point assumes that "message" is a
palloc'd string, which would not be the case for a dgettext result.

Fix both problems by doing the pstrdup step separately.
2016-04-11 00:28:44 -04:00
Tom Lane 7e3bb08038 Fix access-to-already-freed-memory issue in plpython's error handling.
PLy_elog() could attempt to access strings that Python had already freed,
because the strings that PLy_get_spi_error_data() returns are simply
pointers into storage associated with the error "val" PyObject.  That's
fine at the instant PLy_get_spi_error_data() returns them, but just after
that PLy_traceback() intentionally releases the only refcount on that
object, allowing it to be freed --- so that the strings we pass to
ereport() are dangling pointers.

In principle this could result in garbage output or a coredump.  In
practice, I think the risk is pretty low, because there are no Python
operations between where we decrement that refcount and where we use the
strings (and copy them into PG storage), and thus no reason for Python
to recycle the storage.  Still, it's clearly hazardous, and it leads to
Valgrind complaints when running under a Valgrind that hasn't been
lobotomized to ignore Python memory allocations.

The code was a mess anyway: we fetched the error data out of Python
(clearing Python's error indicator) with PyErr_Fetch, examined it, pushed
it back into Python with PyErr_Restore (re-setting the error indicator),
then immediately pulled it back out with another PyErr_Fetch.  Just to
confuse matters even more, there were some gratuitous-and-yet-hazardous
PyErr_Clear calls in the "examine" step, and we didn't get around to doing
PyErr_NormalizeException until after the second PyErr_Fetch, making it even
less clear which object was being manipulated where and whether we still
had a refcount on it.  (If PyErr_NormalizeException did substitute a
different "val" object, it's possible that the problem could manifest for
real, because then we'd be doing assorted Python stuff with no refcount
on the object we have string pointers into.)

So, rearrange all that into some semblance of sanity, and don't decrement
the refcount on the Python error objects until the end of PLy_elog().
In HEAD, I failed to resist the temptation to reformat some messy bits
from 5c3c3cd0a3 along the way.

Back-patch as far as 9.2, because the code is substantially the same
that far back.  I believe that 9.1 has the bug as well; but the code
around it is rather different and I don't want to take a chance on
breaking something for what seems a low-probability problem.
2016-04-10 23:16:10 -04:00
Tom Lane c7a141a986 Fix PL/Python ereport() test to work on Python 2.3.
Per buildfarm.

Pavel Stehule
2016-04-09 16:44:54 -04:00
Teodor Sigaev 5c3c3cd0a3 Enhanced custom error in PLPythonu
This patch adds a new, richer way to emit an error message or exception from
PL/Pythonu code.

Author: Pavel Stehule
Reviewers: Catalin Iacob, Peter Eisentraut, Jim Nasby
2016-04-08 18:33:06 +03:00
Tom Lane 1d2fe56e42 Fix PL/Python for recursion and interleaved set-returning functions.
PL/Python failed if a PL/Python function was invoked recursively via SPI,
since arguments are passed to the function in its global dictionary
(a horrible decision that's far too ancient to undo) and it would delete
those dictionary entries on function exit, leaving the outer recursion
level(s) without any arguments.  Not deleting them would be little better,
since the outer levels would then see the innermost level's arguments.

Since PL/Python uses ValuePerCall mode for evaluating set-returning
functions, it's possible for multiple executions of the same SRF to be
interleaved within a query.  PL/Python failed in such a case, because
it stored only one iterator per function, directly in the function's
PLyProcedure struct.  Moreover, one interleaved instance of the SRF
would see argument values that should belong to another.

Hence, invent code for saving and restoring the argument entries.  To fix
the recursion case, we only need to save at recursive entry and restore
at recursive exit, so the overhead in non-recursive cases is negligible.
To fix the SRF case, we have to save when suspending a SRF and restore
when resuming it, which is potentially not negligible; but fortunately
this is mostly a matter of manipulating Python object refcounts and
should not involve much physical data copying.

Also, store the Python iterator and saved argument values in a structure
associated with the SRF call site rather than the function itself.  This
requires adding a memory context deletion callback to ensure that the SRF
state is cleaned up if the calling query exits before running the SRF to
completion.  Without that we'd leak a refcount to the iterator object in
such a case, resulting in session-lifespan memory leakage.  (In the
pre-existing code, there was no memory leak because there was only one
iterator pointer, but what would happen is that the previous iterator
would be resumed by the next query attempting to use the SRF.  Hardly the
semantics we want.)
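
A minimal sketch of that callback registration (struct and function names are
hypothetical, not PL/Python's actual ones):

    #include "postgres.h"
    #include "utils/memutils.h"

    typedef struct SRFCallState
    {
        MemoryContextCallback cb;
        void       *iterator;       /* e.g. a saved Python iterator reference */
    } SRFCallState;

    static void
    srf_call_cleanup(void *arg)
    {
        /* release whatever the state holds, e.g. drop Python refcounts */
    }

    static SRFCallState *
    register_srf_state(MemoryContext percall_cxt)
    {
        SRFCallState *state = MemoryContextAllocZero(percall_cxt,
                                                     sizeof(SRFCallState));

        state->cb.func = srf_call_cleanup;
        state->cb.arg = state;
        MemoryContextRegisterResetCallback(percall_cxt, &state->cb);
        return state;
    }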

We can buy back some of whatever overhead we've added by getting rid of
PLy_function_delete_args(), which seems a useless activity: there is no
need to delete argument entries from the global dictionary on exit,
since the next time anyone would see the global dict is on the next
fresh call of the PL/Python function, at which time we'd overwrite those
entries with new arg values anyway.

Also clean up some really ugly coding in the SRF implementation, including
such gems as returning directly out of a PG_TRY block.  (The only reason
that failed to crash hard was that all existing call sites immediately
exited their own PG_TRY blocks, popping the dangling longjmp pointer before
there was any chance of it being used.)

In principle this is a bug fix; but it seems a bit too invasive relative to
its value for a back-patch, and besides the fix depends on memory context
callbacks so it could not go back further than 9.5 anyway.

Alexey Grishchenko and Tom Lane
2016-04-05 14:51:19 -04:00
Noah Misch 4ad6f13500 Copyedit comments and documentation. 2016-04-01 21:53:10 -04:00
Tom Lane 9f73a2f6d1 Fix PL/Tcl for vpath builds.
Commit cd37bb7859 works for in-tree builds, but not so much for
VPATH.  Per buildfarm.
2016-03-25 17:13:03 -04:00
Tom Lane cd37bb7859 Improve PL/Tcl errorCode facility by providing decoded name for SQLSTATE.
We don't really want to encourage people to write numeric SQLSTATEs in
programs; that's unreadable and error-prone.  Copy plpgsql's infrastructure
for converting between SQLSTATEs and exception names shown in Appendix A,
and modify examples in tests and documentation to do it that way.
2016-03-25 16:54:52 -04:00
Tom Lane fb8d2a7f57 In PL/Tcl, make database errors return additional info in the errorCode.
Tcl has a convention for returning additional info about an error in a
global variable named errorCode.  Up to now PL/Tcl has ignored that,
but this patch causes database errors caught by PL/Tcl to fill in
errorCode with useful information from the ErrorData struct.

Jim Nasby, reviewed by Pavel Stehule and myself
2016-03-25 15:52:53 -04:00
Tom Lane 07341a2980 Update PL/Perl's comment about hv_store().
Negative klen is documented since Perl 5.16, and 5.6 is no longer
supported so no need to comment about it.

Dagfinn Ilmari Mannsåker
2016-03-14 14:45:45 -04:00
Tom Lane f3f3aae4b7 Improve conversions from uint64 to Perl types.
Perl's integers are pointer-sized, so can hold more than INT_MAX on LP64
platforms, and come in both signed (IV) and unsigned (UV).  Floating
point values (NV) may also be larger than double.

Since Perl 5.19.4 array indices are SSize_t instead of I32, so allow up
to SSize_t_max on those versions.  The limit is not imposed just by
av_extend's argument type, but all the array handling code, so remove
the speculative comment.

Dagfinn Ilmari Mannsåker
2016-03-14 14:38:44 -04:00
Tom Lane 23a27b039d Widen query numbers-of-tuples-processed counters to uint64.
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout.  Some of these values were declared uint32 before, and
others "long".

I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.

The user-visible results are that command tags for SELECT etc will
correctly report tuple counts larger than 4G, as will plpgsql's
GET DIAGNOSTICS ... ROW_COUNT command.  Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.
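
A minimal sketch of a caller adjusted for the wider counter (hypothetical
helper; assumes SPI_connect() was already called):

    #include "postgres.h"
    #include "executor/spi.h"

    static void
    report_rowcount(const char *query)
    {
        if (SPI_execute(query, true, 0) < 0)
            elog(ERROR, "SPI_execute failed");
        /* SPI_processed is now uint64, so use the 64-bit format macro */
        elog(NOTICE, "processed " UINT64_FORMAT " rows", SPI_processed);
    }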

Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long".  It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.

Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
2016-03-12 16:05:29 -05:00
Andres Freund e66197fa2e plperl: Correctly handle empty arrays in plperl_ref_from_pg_array.
plperl_ref_from_pg_array() didn't consider the case that postgres arrays
can have 0 dimensions (when they're empty) and accessed the first
dimension without a check. Fix that by special casing the empty array
case.
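
A minimal sketch of the guard (hypothetical helper): an empty array has zero
dimensions, so ARR_DIMS() must not be dereferenced:

    #include "postgres.h"
    #include "utils/array.h"

    static int
    first_dimension_length(ArrayType *array)
    {
        if (ARR_NDIM(array) == 0)
            return 0;           /* empty array: nothing to iterate over */
        return ARR_DIMS(array)[0];
    }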

Author: Alex Hunsaker
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Discussion: 20160308063240.usnzg6bsbjrne667@alap3.anarazel.de
Backpatch: 9.1-
2016-03-08 13:42:57 -08:00
Magnus Hagander 6c90996a4c Add prefix to pl/pgsql global variables and functions
Rename pl/pgsql global variables to always have a plpgsql_ prefix,
so they don't conflict with symbols from other loaded shared libraries.
2016-03-03 10:45:59 +01:00