Commit Graph

1633 Commits

Author SHA1 Message Date
Tom Lane 820c0305f6 Support domains over composite types in PL/Tcl.
Since PL/Tcl does little with SQL types internally, this is just a
matter of making it work with composite-domain function arguments
and results.

In passing, make it allow RECORD-type arguments --- that's a trivial
change that nobody had bothered with up to now.

Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
2017-10-26 16:00:17 -04:00
Peter Eisentraut 0008a106d4 Use Py_RETURN_NONE where suitable
This is more idiomatic style and available as of Python 2.4, which is
our minimum.
2017-09-29 16:51:39 -04:00
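
As an aside, a minimal sketch of the two equivalent spellings in a CPython
extension function (the function names are invented for illustration):

    #include <Python.h>

    /* Old spelling: bump None's refcount and return it explicitly. */
    static PyObject *
    do_nothing_old(PyObject *self, PyObject *args)
    {
        Py_INCREF(Py_None);
        return Py_None;
    }

    /* New spelling: Py_RETURN_NONE expands to the same incref-and-return. */
    static PyObject *
    do_nothing_new(PyObject *self, PyObject *args)
    {
        Py_RETURN_NONE;
    }
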
Peter Eisentraut 65c8656202 Fix plperl build
The changes in 639928c988 turned out to
require Perl 5.9.3, which is newer than our minimum required version.
So revert back to the old code for the normal case and only use the new
variant when both coverage and vpath are used.  As the minimum Perl
version moves forward, we can drop the old code sometime.
2017-09-27 15:51:04 -04:00
Peter Eisentraut 639928c988 Improve vpath support in plperl build
Run xsubpp with the -output option instead of redirecting stdout.  That
ensures that the #line directives in the output file point to the right
place in a vpath build.  This in turn fixes an error in coverage builds
where the source files could not be found.

Refactor the makefile rules while we're here.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2017-09-27 11:51:21 -04:00
Tom Lane b8060e41b5 Prefer argument name over "$n" for the refname of a plpgsql argument.
If a function argument has a name, use that as the "refname" of the
PLpgSQL_datum representing the argument, instead of $n as before.
This allows better error messages in some cases.

Pavel Stehule, reviewed by Jeevan Chalke

Discussion: https://postgr.es/m/CAFj8pRB9GyU2U1Sb2ssgP26DZ_yq-FYDfpvUvGQ=k4R=yOPVjg@mail.gmail.com
2017-09-11 16:24:43 -04:00
Andres Freund 2cd7084524 Change tupledesc->attrs[n] to TupleDescAttr(tupledesc, n).
This is a mechanical change in preparation for a later commit that
will change the layout of TupleDesc.  Introducing a macro to abstract
the details of where attributes are stored will allow us to change
that in a separate step and revise it in the future.

Author: Thomas Munro, editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com
2017-08-20 11:19:07 -07:00
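
A minimal sketch of the mechanical rewrite, with the surrounding function
invented for illustration:

    #include "postgres.h"
    #include "access/tupdesc.h"

    static const char *
    attname_of(TupleDesc tupdesc, int i)
    {
        /*
         * Before: direct array access, i.e. tupdesc->attrs[i].
         * After: go through the abstraction macro instead.
         */
        Form_pg_attribute att = TupleDescAttr(tupdesc, i);

        return NameStr(att->attname);
    }
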
Tom Lane b73f1b5c29 Make simpler-simple-expressions code cope with a Gather plan.
Commit 00418c612 expected that the plan generated for a simple-expression
query would always be a plain Result node.  However, if force_parallel_mode
is on, the planner might stick a Gather atop that.  Cope by looking through
the Gather.  For safety, assert that the Gather's tlist is trivial.

Per buildfarm.

Discussion: https://postgr.es/m/23425.1502822098@sss.pgh.pa.us
2017-08-15 16:49:52 -04:00
Tom Lane 00418c6124 Simplify plpgsql's check for simple expressions.
plpgsql wants to recognize expressions that it can execute directly
via ExecEvalExpr() instead of going through the full SPI machinery.
Originally the test for this consisted of recursively groveling through
the post-planning expression tree to see if it contained only nodes that
plpgsql recognized as safe.  That was a major maintenance headache, since
it required updating plpgsql every time we added any kind of expression
node.  It was also kind of expensive, so over time we added various
pre-planning checks to try to short-circuit having to do that.
Robert Haas pointed out that as of the SRF-processing changes in v10,
particularly the addition of Query.hasTargetSRFs, there really isn't
any reason to make the recursive scan at all: the initial checks cover
everything we really care about.  We do have to make sure that those
checks agree with what inline_function() considers, so that inlining
of a function that formerly wasn't inlined can't cause an expression
considered simple to become non-simple.

Hence, delete the recursive function exec_simple_check_node(), and tweak
those other tests to more exactly agree with inline_function().  Adjust
some comments and function naming to match.

Discussion: https://postgr.es/m/CA+TgmoZGZpwdEV2FQWaVxA_qZXsQE1DAS5Fu8fwxXDNvfndiUQ@mail.gmail.com
2017-08-15 12:28:39 -04:00
Peter Eisentraut f7668b2b35 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1a0b5e655d7871506c2b1c7ba562c2de6b6a55de
2017-08-07 13:55:34 -04:00
Peter Eisentraut 26d40ada3f Message style improvements 2017-08-04 18:31:32 -04:00
Tom Lane b4cc35fbb7 Tighten coding for non-composite case in plperl's return_next.
Coverity complained about this code's practice of using scalar variables
as single-element arrays.  While that's really just nitpicking, it probably
is more readable to declare them as arrays, so let's do that.  A more
important point is that the code was just blithely assuming that the
result tupledesc has exactly one column; if it doesn't, we'd likely get
a crash of some sort in tuplestore_putvalues.  Since the tupledesc is
manufactured outside of plperl, that seems like an uncomfortably long
chain of assumptions.  We can nail it down at little cost with a sanity
check earlier in the function.
2017-07-31 11:33:46 -04:00
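
A sketch of the kind of sanity check described; this is a bare fragment, and
the error code and wording are illustrative rather than the actual plperl code:

    if (tupdesc->natts != 1)
        ereport(ERROR,
                (errcode(ERRCODE_DATATYPE_MISMATCH),
                 errmsg("expected a result tuple descriptor with exactly one column")));
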
Tom Lane 3c163a7fc7 PL/Perl portability fix: absorb relevant -D switches from Perl.
The Perl documentation is very clear that stuff calling libperl should
be built with the compiler switches shown by Perl's $Config{ccflags}.
We'd been ignoring that up to now, and mostly getting away with it,
but recent Perl versions contain ABI compatibility cross-checks that
fail on some builds because of this omission.  In particular, the
sizeof(PerlInterpreter) can come out different due to some fields being
added or removed, which means we have a live ABI hazard that we'd better
fix rather than continuing to sweep it under the rug.

However, it still seems like a bad idea to just absorb $Config{ccflags}
verbatim.  In some environments Perl was built with a different compiler
that doesn't even use the same switch syntax.  -D switch syntax is pretty
universal though, and absorbing Perl's -D switches really ought to be
enough to fix the problem.

Furthermore, Perl likes to inject stuff like -D_LARGEFILE_SOURCE and
-D_FILE_OFFSET_BITS=64 into $Config{ccflags}, which affect libc ABIs on
platforms where they're relevant.  Adopting those seems dangerous too.
It's unclear whether a build wherein Perl and Postgres have different ideas
of sizeof(off_t) etc would work, or whether anyone would care about making
it work.  But it's dead certain that having different stdio ABIs in
core Postgres and PL/Perl will not work; we've seen that movie before.
Therefore, let's also ignore -D switches for symbols beginning with
underscore.  The symbols that we actually need to import should be the ones
mentioned in perl.h's PL_bincompat_options stanza, and none of those start
with underscore, so this seems likely to work.  (If it turns out not to
work everywhere, we could consider intersecting the symbols mentioned in
PL_bincompat_options with the -D switches.  But that will be much more
complicated, so let's try this way first.)

This will need to be back-patched, but first let's see what the
buildfarm makes of it.

Ashutosh Sharma, some adjustments by me

Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
2017-07-28 14:25:28 -04:00
Tom Lane bebe174bb4 PL/Perl portability fix: avoid including XSUB.h in plperl.c.
In Perl builds that define PERL_IMPLICIT_SYS, XSUB.h defines macros
that replace a whole lot of basic libc functions with Perl functions.
We can't tolerate that in plperl.c; it breaks at least PG_TRY and
probably other stuff.  The core idea of this patch is to include XSUB.h
only in the .xs files where it's really needed, and to move any code
broken by PERL_IMPLICIT_SYS out of the .xs files and into plperl.c.

The reason this hasn't been a problem before is that our build techniques
did not result in PERL_IMPLICIT_SYS appearing as a #define in PL/Perl,
even on some platforms where Perl thinks it is defined.  That's about to
change in order to fix a nasty portability issue, so we need this work to
make the code safe for that.

Rather unaccountably, the Perl people chose XSUB.h as the place to provide
the versions of the aTHX/aTHX_ macros that are needed by code that's not
explicitly aware of the MULTIPLICITY API conventions.  Hence, just removing
XSUB.h from plperl.c fails miserably.  But we can work around that by
defining PERL_NO_GET_CONTEXT (which would make the relevant stanza of
XSUB.h a no-op anyway).  As explained in perlguts.pod, that means we need
to add a "dTHX" macro call in every C function that calls a Perl API
function.  In most of them we just add this at the top; but since the
macro fetches the current Perl interpreter pointer, more care is needed
in functions that switch the active interpreter.  Lack of the macro is
easily recognized since it results in bleats about "my_perl" not being
defined.

(A nice side benefit of this is that it significantly reduces the number
of fetches of the current interpreter pointer.  On my machine, plperl.so
gets more than 10% smaller, and there's probably some performance win too.
We could reduce the number of fetches still more by decorating the code
with pTHX_/aTHX_ macros to pass the interpreter pointer around, as
explained by perlguts.pod; but that's a task for another day.)

Formatting note: pgindent seems happy to treat "dTHX;" as a declaration
so long as it's the first thing after the left brace, as we'd already
observed with respect to the similar macro "dSP;".  If you try to put
it later in a set of declarations, pgindent puts ugly extra space
around it.

Having removed XSUB.h from plperl.c, we need only move the support
functions for spi_return_next and util_elog (both of which use PG_TRY)
out of the .xs files and into plperl.c.  This seems sufficient to
avoid the known problems caused by PERL_IMPLICIT_SYS, although we
could move more code if additional issues emerge.

This will need to be back-patched, but first let's see what the
buildfarm makes of it.

Patch by me, with some help from Ashutosh Sharma

Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
2017-07-28 12:25:43 -04:00
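
A minimal sketch of the resulting pattern, with the helper function invented
for illustration:

    #define PERL_NO_GET_CONTEXT
    #include "EXTERN.h"
    #include "perl.h"

    static SV *
    make_integer_sv(void)
    {
        dTHX;                   /* fetch the current Perl interpreter pointer */

        /* newSViv is really Perl_newSViv(aTHX_ ...), so it needs the context */
        return newSViv(42);
    }
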
Alvaro Herrera 6c774caf0e Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: c5a8de3653bb1af6b0eb41cc6bf090c5522df52b
2017-07-10 11:53:55 -04:00
Peter Eisentraut 898d24ae2a PL/Python: Fix hint about returning composite type from Python
('foo') is not a Python tuple: it is a string wrapped in parentheses.  A
valid 1-element Python tuple is ('foo',).

Author: Daniele Varrazzo <daniele.varrazzo@gmail.com>
2017-06-30 16:51:14 -04:00
Tom Lane 382ceffdf7 Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.

By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis.  However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent.  That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.

This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:35:54 -04:00
Tom Lane c7b8998ebb Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.

Commit e3860ffa4d wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code.  The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there.  BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs.  So the
net result is that in about half the cases, such comments are placed
one tab stop left of before.  This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.

Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:19:25 -04:00
Tom Lane e3860ffa4d Initial pgindent run with pg_bsd_indent version 2.0.
The new indent version includes numerous fixes thanks to Piotr Stefaniak.
The main changes visible in this commit are:

* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts,
  sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as
  well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed
  with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less
  than the expected column 33.

On the less good side, it now tends to put more whitespace around typedef
names that are not listed in typedefs.list.  This might encourage us to
put more effort into typedef name collection; it's not really a bug in
indent itself.

There are more changes coming after this round, having to do with comment
indentation and alignment of lines appearing within parentheses.  I wanted
to limit the size of the diffs to something that could be reviewed without
one's eyes completely glazing over, so it seemed better to split up the
changes as much as practical.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 14:39:04 -04:00
Dean Rasheed d3c3f2b1e2 Teach PL/pgSQL about partitioned tables.
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update a couple of places in
PL/pgSQL to handle it. Specifically plpgsql_parse_cwordtype() and
build_row_from_class() needed updating in order to make table%ROWTYPE
and table.col%TYPE work for partitioned tables.

Dean Rasheed, reviewed by Amit Langote.

Discussion: https://postgr.es/m/CAEZATCUnNOKN8sLML9jUzxecALWpEXK3a3W7y0PgFR4%2Buhgc%3Dg%40mail.gmail.com
2017-06-14 09:00:01 +01:00
Alvaro Herrera 08aa550694 Update expected file
Missed in ea3e310e71.
2017-05-25 14:41:43 -04:00
Alvaro Herrera ea3e310e71 Fix message case 2017-05-25 13:16:00 -04:00
Bruce Momjian ce55481032 Post-PG 10 beta1 pgperltidy run 2017-05-17 19:01:23 -04:00
Bruce Momjian a6fd7b7a5f Post-PG 10 beta1 pgindent run
perltidy run not included.
2017-05-17 16:31:56 -04:00
Peter Eisentraut 82d24bab75 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 398beeef4921df0956f917becd7b5669d2a8a5c4
2017-05-15 12:19:54 -04:00
Tom Lane 8f0530f580 Improve castNode notation by introducing list-extraction-specific variants.
This extends the castNode() notation introduced by commit 5bcab1114 to
provide, in one step, extraction of a list cell's pointer and coercion to
a concrete node type.  For example, "lfirst_node(Foo, lc)" is the same
as "castNode(Foo, lfirst(lc))".  Almost half of the uses of castNode
that have appeared so far include a list extraction call, so this is
pretty widely useful, and it saves a few more keystrokes compared to the
old way.

As with the previous patch, back-patch the addition of these macros to
pg_list.h, so that the notation will be available when back-patching.

Patch by me, after an idea of Andrew Gierth's.

Discussion: https://postgr.es/m/14197.1491841216@sss.pgh.pa.us
2017-04-10 13:51:53 -04:00
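
A sketch of the notational change, with the list and node types chosen purely
for illustration:

    #include "postgres.h"
    #include "nodes/parsenodes.h"

    static void
    walk_rtable(Query *query)
    {
        ListCell   *lc;

        foreach(lc, query->rtable)
        {
            /* before: castNode(RangeTblEntry, lfirst(lc)) */
            RangeTblEntry *rte = lfirst_node(RangeTblEntry, lc);

            (void) rte;         /* ... use rte here ... */
        }
    }
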
Peter Eisentraut afd79873a0 Capitalize names of PLs consistently
Author: Daniel Gustafsson <daniel@yesql.se>
2017-04-05 00:38:25 -04:00
Kevin Grittner 5ebeb579b9 Follow-on cleanup for the transition table patch.
Commit 59702716 added transition table support to PL/pgsql so that
SQL queries in trigger functions could access those transient
tables.  In order to provide the same level of support for PL/perl,
PL/python and PL/tcl, refactor the relevant code into a new
function SPI_register_trigger_data.  Call the new function in the
trigger handler of all four PLs, and document it as a public SPI
function so that authors of out-of-tree PLs can do the same.

Also get rid of a second QueryEnvironment object that was
maintained by PL/pgsql.  That was previously used to deal with
cursors, but the same approach wasn't appropriate for PLs that are
less tangled up with core code.  Instead, have SPI_cursor_open
install the connection's current QueryEnvironment, as already
happens for SPI_execute_plan.

While in the docs, remove the note that transition tables were only
supported in C and PL/pgSQL triggers, and correct some omissions.

Thomas Munro with some work by Kevin Grittner (mostly docs)
2017-04-04 18:36:39 -05:00
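
A sketch of how a PL's trigger handler might call the new function; the
surrounding code and error handling are illustrative, assuming the usual SPI
convention of negative return codes on failure:

    #include "postgres.h"
    #include "commands/trigger.h"
    #include "executor/spi.h"

    static void
    expose_transition_tables(TriggerData *tdata)
    {
        int         rc;

        if (SPI_connect() != SPI_OK_CONNECT)
            elog(ERROR, "SPI_connect failed");

        /* make OLD TABLE / NEW TABLE visible to queries run through SPI */
        rc = SPI_register_trigger_data(tdata);
        if (rc < 0)
            elog(ERROR, "SPI_register_trigger_data failed: %d", rc);
    }
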
Kevin Grittner 5970271632 Add transition table support to plpgsql.
Kevin Grittner and Thomas Munro
Reviewed by Heikki Linnakangas, David Fetter, and Thomas Munro
with valuable comments and suggestions from many others
2017-03-31 23:30:08 -05:00
Peter Eisentraut 70ec3f1f8f PL/Python: Add cursor and execute methods to plan object
Instead of

    plan = plpy.prepare(...)
    res = plpy.execute(plan, ...)

you can now write

    plan = plpy.prepare(...)
    res = plan.execute(...)

or even

    res = plpy.prepare(...).execute(...)

and similarly for the cursor() method.

This is more in object-oriented style, and makes the hybrid nature of
the existing execute() function less confusing.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2017-03-27 11:37:22 -04:00
Peter Eisentraut facde2a98f Clean up Perl code according to perlcritic
Fix all perlcritic warnings of severity level 5, except in
src/backend/utils/Gen_dummy_probes.pl, which is automatically generated.

Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2017-03-27 08:18:22 -04:00
Tom Lane 244dd95ce9 Update some obsolete comments.
Fix a few stray references to expression eval functions that don't
exist anymore or don't take the same input representation they used to.
2017-03-26 11:36:46 -04:00
Andres Freund b8d7f053c5 Faster expression evaluation and targetlist projection.
This replaces the old, recursive tree-walk based evaluation, with
non-recursive, opcode dispatch based, expression evaluation.
Projection is now implemented as part of expression evaluation.

This both leads to significant performance improvements, and makes
future just-in-time compilation of expressions easier.

The speed gains primarily come from:
- non-recursive implementation reduces stack usage / overhead
- simple sub-expressions are implemented with a single jump, without
  function calls
- sharing some state between different sub-expressions
- reduced amount of indirect/hard to predict memory accesses by laying
  out operation metadata sequentially; including the avoidance of
  nearly all of the previously used linked lists
- more code has been moved to expression initialization, avoiding
  constant re-checks at evaluation time

Future just-in-time compilation (JIT) has become easier, as
demonstrated by released patches intended to be merged in a later
release, for primarily two reasons: Firstly, due to a stricter split
between expression initialization and evaluation, less code has to be
handled by the JIT. Secondly, due to the non-recursive nature of the
generated "instructions", less performance-critical code-paths can
easily be shared between interpreted and compiled evaluation.

The new framework allows for significant future optimizations. E.g.:
- basic infrastructure to later reduce the per-executor-startup
  overhead of expression evaluation, by caching state in prepared
  statements.  That'd be helpful in OLTPish scenarios where
  initialization overhead is measurable.
- optimizing the generated "code". A number of proposals for potential
  work have already been made.
- optimizing the interpreter. Similarly a number of proposals have
  been made here too.

The move of logic into the expression initialization step leads to some
backward-incompatible changes:
- Function permission checks are now done during expression
  initialization, whereas previously they were done during
  execution. In edge cases this can lead to errors being raised that
  previously wouldn't have been, e.g. a NULL array being coerced to a
  different array type previously didn't perform checks.
- The set of domain constraints to be checked is now evaluated once
  during expression initialization; previously it was re-built
  every time a domain check was evaluated. For normal queries this
  doesn't change much, but e.g. for plpgsql functions, which cache
  ExprStates, the old set could stick around longer.  The behavior
  in this area might still change.

Author: Andres Freund, with significant changes by Tom Lane,
	changes by Heikki Linnakangas
Reviewed-By: Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/20161206034955.bh33paeralxbtluv@alap3.anarazel.de
2017-03-25 14:52:06 -07:00
Robert Haas 61c2e1a95f Improve access to parallel query from procedural languages.
In SQL, the ability to use parallel query was previously contingent on
fcache->readonly_func, which is only set for non-volatile functions;
but the volatility of a function has no bearing on whether queries
inside it can use parallelism.  Remove that condition.

SPI_execute and SPI_execute_with_args always run the plan just once,
though not necessarily to completion.  Given the changes in commit
691b8d5928, it's sensible to pass
CURSOR_OPT_PARALLEL_OK here, so do that.  This improves access to
parallelism for any caller that uses these functions to execute
queries.  Such callers include plperl, plpython, pltcl, and plpgsql,
though it's not the case that they all use these functions
exclusively.

In plpgsql, allow parallel query for plain SELECT queries (as
opposed to PERFORM, which already worked) and for plain expressions
(which probably won't go through the executor at all, because they
will likely be simple expressions, but if they do then this helps).

Rafia Sabih and Robert Haas, reviewed by Dilip Kumar and Amit Kapila

Discussion: http://postgr.es/m/CAOGQiiMfJ+4SQwgG=6CVHWoisiU0+7jtXSuiyXBM3y=A=eJzmg@mail.gmail.com
2017-03-24 14:46:33 -04:00
Robert Haas f120b614e0 plpgsql: Don't generate parallel plans for RETURN QUERY.
Commit 7aea8e4f2d allowed a parallel
plan to be generated for a RETURN QUERY or RETURN QUERY EXECUTE
statement in a PL/pgsql block, but that's a bad idea because plpgsql
asks the executor for 50 rows at a time.  That means that we'll always
be running serially a plan that was intended for parallel execution,
which is not a good idea.  Fix by not requesting a parallel plan from
the outset.

Per discussion, back-patch to 9.6.  There is a slight risk that, due
to optimizer error, somebody could have a case where the parallel plan
executed serially is actually faster than the supposedly-best serial
plan, but the consensus seems to be that that's not sufficient
justification for leaving 9.6 unpatched.

Discussion: http://postgr.es/m/CA+TgmoZ_ZuH+auEeeWnmtorPsgc_SmP+XWbDsJ+cWvWBSjNwDQ@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
2017-03-24 12:30:39 -04:00
Peter Eisentraut f97a028d8e Spelling fixes in code comments
From: Josh Soref <jsoref@gmail.com>
2017-03-14 12:58:39 -04:00
Noah Misch 3a0d473192 Use wrappers of PG_DETOAST_DATUM_PACKED() more.
This makes almost all core code follow the policy introduced in the
previous commit.  Specific decisions:

- Text search support functions with char* and length arguments, such as
  prsstart and lexize, may receive unaligned strings.  I doubt
  maintainers of non-core text search code will notice.

- Use plain VARDATA() on values detoasted or synthesized earlier in the
  same function.  Use VARDATA_ANY() on varlenas sourced outside the
  function, even if they happen to always have four-byte headers.  As an
  exception, retain the universal practice of using VARDATA() on return
  values of SendFunctionCall().

- Retain PG_GETARG_BYTEA_P() in pageinspect.  (Page images are too large
  for a one-byte header, so this misses no optimization.)  Sites that do
  not call get_page_from_raw() typically need the four-byte alignment.

- For now, do not change btree_gist.  Its use of four-byte headers in
  memory is partly entangled with storage of 4-byte headers inside
  GBT_VARKEY, on disk.

- For now, do not change gtrgm_consistent() or gtrgm_distance().  They
  incorporate the varlena header into a cache, and there are multiple
  credible implementation strategies to consider.
2017-03-12 19:35:34 -04:00
Tom Lane b58fd4a9ca Add a "subtransaction" command to PL/Tcl.
This allows rolling back the effects of some SPI commands without
having to fail the entire PL/Tcl function.

Victor Wagner, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/20170108205750.2dab04a1@wagner.wagner.home
2017-03-11 14:37:05 -05:00
Tom Lane 08da52859a Bring plpgsql into line with header inclusion policy.
We have a project policy that every .c file should start by including
postgres.h, postgres_fe.h, or c.h as appropriate; and then there is no
need for any .h file to explicitly include any of these.  (The core
reason for this policy is to make it easy to verify that pg_config_os.h
is included before any system headers such as <stdio.h>; without that,
we have portability issues on some platforms due to variation in largefile
options across different modules in the backend.  Also, if .h files were
responsible for choosing which of these key headers to include, .h files
that need to be includable in either frontend or backend compiles would be
in trouble.)

plpgsql was blithely ignoring this policy, so whack it upside the head
until it complies.  I also chose to standardize on including plpgsql's
own .h files after all core-system headers that it pulls in.  That
could've been done either way, but this way seems saner.

Discussion: https://postgr.es/m/CAEepm=2zCoeq3QxVwhS5DFeUh=yU6z81pbWMgfOB8OzyiBwxzw@mail.gmail.com
Discussion: https://postgr.es/m/11634.1488932128@sss.pgh.pa.us
2017-03-08 17:21:08 -05:00
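
A sketch of the inclusion order the policy calls for in a backend .c file;
the non-core header names are illustrative:

    #include "postgres.h"           /* always first in backend .c files */

    #include <sys/stat.h>           /* system headers next */

    #include "utils/builtins.h"     /* other Postgres headers last */
    #include "plpgsql.h"            /* the module's own headers after those */
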
Tom Lane 0d2b1f305d Invent start_proc parameters for PL/Tcl.
Define GUCs pltcl.start_proc and pltclu.start_proc.  When set to a
nonempty value at the time a new Tcl interpreter is created, the
parameterless pltcl or pltclu function named by the GUC is called to
allow user-controlled initialization to occur within the interpreter.
This is modeled on plv8's start_proc parameter, and also has much in
common with plperl's on_init feature.  It allows users to fully
replace the "modules" feature that was removed in commit 817f2a586.

Since an initializer function could subvert later Tcl code in nearly
arbitrary ways, mark both GUCs as SUSET for now.  It would be nice
to find a way to relax that someday; but the corresponding GUCs in
plperl are also SUSET, and there's not been much complaint.

Discussion: https://postgr.es/m/22067.1488046447@sss.pgh.pa.us
2017-03-07 12:40:44 -05:00
Tom Lane 817f2a5863 Remove PL/Tcl's "module" facility.
PL/Tcl has long had a facility whereby Tcl code could be autoloaded from
a database table named "pltcl_modules".  However, nobody is using it, as
evidenced by the recent discovery that it's never been fixed to work with
standard_conforming_strings turned on.  Moreover, it's rather shaky from
a security standpoint, and the table design is very old and crufty (partly
because it dates from before we had TOAST).  A final problem is that
because the table-population scripts depend on the Tcl client library
Pgtcl, which we removed from the core distribution in 2004, it's
impossible to create a self-contained regression test for the feature.
Rather than try to surmount these problems, let's just remove it.

A follow-on patch will provide a way to execute user-defined
initialization code, similar to features that exist in plperl and plv8.
With that, it will be possible to implement this feature or similar ones
entirely in userspace, which is where it belongs.

Discussion: https://postgr.es/m/22067.1488046447@sss.pgh.pa.us
2017-02-27 11:20:22 -05:00
Tom Lane 9e3755ecb2 Remove useless duplicate inclusions of system header files.
c.h #includes a number of core libc header files, such as <stdio.h>.
There's no point in re-including these after having read postgres.h,
postgres_fe.h, or c.h; so remove code that did so.

While at it, also fix some places that were ignoring our standard pattern
of "include postgres[_fe].h, then system header files, then other Postgres
header files".  While there's not any great magic in doing it that way
rather than system headers last, it's silly to have just a few files
deviating from the general pattern.  (But I didn't attempt to enforce this
globally, only in files I was touching anyway.)

I'd be the first to say that this is mostly compulsive neatnik-ism,
but over time it might save enough compile cycles to be useful.
2017-02-25 16:12:55 -05:00
Peter Eisentraut 38d103763d Make more use of castNode() 2017-02-21 11:59:09 -05:00
Peter Eisentraut 04aad40186 Drop support for Python 2.3
There is no specific reason for this right now, but keeping support for
old Python versions around indefinitely increases the maintenance
burden.  The oldest supported Python version is now Python 2.4, which is
still shipped in RHEL/CentOS 5 by default.

In configure, add a check for the required Python version and give a
friendly error message for an old version, instead of relying on an
obscure build error later on.
2017-02-21 09:49:22 -05:00
Tom Lane 170511b30d Adjust PL/Tcl regression test to dodge a possible bug or zone dependency.
One case in the PL/Tcl tests is observed to fail on RHEL5 with a Turkish
time zone setting.  It's not clear if this is an old Tcl bug or something
odd about the zone data, but in any case that test is meant to see if the
Tcl [clock] command works at all, not what its corner-case behaviors are.
Therefore we have no need to test exactly which week a Sunday midnight is
considered to fall into.  Probe the following Tuesday instead.

Discussion: https://postgr.es/m/797.1487517822@sss.pgh.pa.us
2017-02-19 16:14:52 -05:00
Heikki Linnakangas 181bdb90ba Fix typos in comments.
Backpatch to all supported versions, where applicable, to make backpatching
of future fixes go more smoothly.

Josh Soref

Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com
2017-02-06 11:33:58 +02:00
Tom Lane 7afd56c3c6 Use castNode() in a bunch of statement-list-related code.
When I wrote commit ab1f0c822, I really missed the castNode() macro that
Peter E. had proposed shortly before.  This back-fills the uses I would
have put it to.  It's probably not all that significant, but there are
more assertions here than there were before, and conceivably they will
help catch any bugs associated with those representation changes.

I left behind a number of usages like "(Query *) copyObject(query_var)".
Those could have been converted as well, but Peter has proposed another
notational improvement that would handle copyObject cases automatically,
so I let that be for now.
2017-01-26 22:09:34 -05:00
Tom Lane c0ef456b97 Volatile-ize some plperl variables that must survive into PG_CATCH blocks.
This appears to be necessary to fix a failure seen on buildfarm member
sittella.  It shouldn't be necessary according to the letter of the C
standard, because we don't change the values of these variables within
the PG_TRY blocks; but somehow gcc 4.7.2 is dropping the ball.

Discussion: https://postgr.es/m/17555.1485179975@sss.pgh.pa.us
2017-01-23 09:15:49 -05:00
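
A minimal sketch of the pattern involved; the function and helper names are
invented for illustration:

    static void
    guarded_call(void)
    {
        SV *volatile result = NULL;

        PG_TRY();
        {
            result = some_perl_call();      /* hypothetical helper that may elog */
        }
        PG_CATCH();
        {
            /*
             * Control arrives here via longjmp; "result" reliably holds its
             * latest value only because it is declared volatile.
             */
            if (result != NULL)
                elog(WARNING, "error occurred after a result was produced");
            PG_RE_THROW();
        }
        PG_END_TRY();
    }
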
Peter Eisentraut f21a563d25 Move some things from builtins.h to new header files
This avoids builtins.h having to include additional header files.
2017-01-20 20:29:53 -05:00
Andres Freund ea15e18677 Remove obsoleted code relating to targetlist SRF evaluation.
Since 69f4b9c plain expression evaluation (and thus normal projection)
can't return sets of tuples anymore. Thus remove code dealing with
that possibility.

This will require adjustments in external code using
ExecEvalExpr()/ExecProject() - that should neither be hard nor very
common.

Author: Andres Freund and Tom Lane
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
2017-01-19 14:40:41 -08:00
Andres Freund 8b07aee8c5 Adapt python regression tests to 69f4b9c85f.
Hopefully this'll unbreak the buildfarm.
2017-01-18 16:11:19 -08:00
Tom Lane ab1f0c8225 Change representation of statement lists, and add statement location info.
This patch makes several changes that improve the consistency of
representation of lists of statements.  It's always been the case
that the output of parse analysis is a list of Query nodes, whatever
the types of the individual statements in the list.  This patch brings
similar consistency to the outputs of raw parsing and planning steps:

* The output of raw parsing is now always a list of RawStmt nodes;
the statement-type-dependent nodes are one level down from that.

* The output of pg_plan_queries() is now always a list of PlannedStmt
nodes, even for utility statements.  In the case of a utility statement,
"planning" just consists of wrapping a CMD_UTILITY PlannedStmt around
the utility node.  This list representation is now used in Portal and
CachedPlan plan lists, replacing the former convention of intermixing
PlannedStmts with bare utility-statement nodes.

Now, every list of statements has a consistent head-node type depending
on how far along it is in processing.  This allows changing many places
that formerly used generic "Node *" pointers to use a more specific
pointer type, thus reducing the number of IsA() tests and casts needed,
as well as improving code clarity.

Also, the post-parse-analysis representation of DECLARE CURSOR is changed
so that it looks more like EXPLAIN, PREPARE, etc.  That is, the contained
SELECT remains a child of the DeclareCursorStmt rather than getting flipped
around to be the other way.  It's now true for both Query and PlannedStmt
that utilityStmt is non-null if and only if commandType is CMD_UTILITY.
That allows simplifying a lot of places that were testing both fields.
(I think some of those were just defensive programming, but in many places,
it was actually necessary to avoid confusing DECLARE CURSOR with SELECT.)

Because PlannedStmt carries a canSetTag field, we're also able to get rid
of some ad-hoc rules about how to reconstruct canSetTag for a bare utility
statement; specifically, the assumption that a utility is canSetTag if and
only if it's the only one in its list.  While I see no near-term need for
relaxing that restriction, it's nice to get rid of the ad-hocery.

The API of ProcessUtility() is changed so that what it's passed is the
wrapper PlannedStmt not just the bare utility statement.  This will affect
all users of ProcessUtility_hook, but the changes are pretty trivial; see
the affected contrib modules for examples of the minimum change needed.
(Most compilers should give pointer-type-mismatch warnings for uncorrected
code.)

There's also a change in the API of ExplainOneQuery_hook, to pass through
cursorOptions instead of expecting hook functions to know what to pick.
This is needed because of the DECLARE CURSOR changes, but really should
have been done in 9.6; it's unlikely that any extant hook functions
know about using CURSOR_OPT_PARALLEL_OK.

Finally, teach gram.y to save statement boundary locations in RawStmt
nodes, and pass those through to Query and PlannedStmt nodes.  This allows
more intelligent handling of cases where a source query string contains
multiple statements.  This patch doesn't actually do anything with the
information, but a follow-on patch will.  (Passing this information through
cleanly is the true motivation for these changes; while I think this is all
good cleanup, it's unlikely we'd have bothered without this end goal.)

catversion bump because addition of location fields to struct Query
affects stored rules.

This patch is by me, but it owes a good deal to Fabien Coelho who did
a lot of preliminary work on the problem, and also reviewed the patch.

Discussion: https://postgr.es/m/alpine.DEB.2.20.1612200926310.29821@lancre
2017-01-14 16:02:35 -05:00
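
A small sketch of what a ProcessUtility_hook now sees, using only fields named
above; the hook name is invented and its remaining arguments are elided:

    static void
    my_utility_hook(PlannedStmt *pstmt /* , ...remaining hook arguments... */)
    {
        /*
         * The wrapper PlannedStmt carries the utility node one level down;
         * utilityStmt is non-null exactly when commandType is CMD_UTILITY.
         */
        if (pstmt->commandType == CMD_UTILITY)
        {
            Node       *parsetree = pstmt->utilityStmt;

            (void) parsetree;   /* dispatch on nodeTag(parsetree) as before */
        }
    }
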
Tom Lane 5b29e6b688 In PL/Tcl tests, don't choke if optional error fields are missing.
This fixes a portability issue introduced by commit 961bed020: with a
compiler that doesn't support PG_FUNCNAME_MACRO, the "funcname" field of
errorCode won't be provided, leading to a failure of the unset command.
I added -nocomplain to the unset commands for filename and lineno too, just
in case, though I know of no platform that wouldn't populate those fields.
(BTW, -nocomplain is new in Tcl 8.4, but fortunately we dropped support
for pre-8.4 Tcl some time ago.)

Per buildfarm member pademelon.
2017-01-13 16:59:52 -05:00
Tom Lane 8c5722948e Fix error handling in pltcl_returnnext.
We can't throw elog(ERROR) out of a Tcl command procedure; we have
to catch the error and return TCL_ERROR to the Tcl interpreter.
pltcl_returnnext failed to meet this requirement, so that errors
detected by pltcl_build_tuple_result or other functions called here
led to longjmp'ing out of the Tcl interpreter and thereby leaving it
in a bad state.  Use the existing subtransaction support to prevent
that.  Oversight in commit 26abb50c4, found more or less accidentally
by the buildfarm thanks to the tests added in 961bed020.

Report: https://postgr.es/m/30647.1483989734@sss.pgh.pa.us
2017-01-09 17:47:10 -05:00
Tom Lane 961bed0208 Expand the regression tests for PL/Tcl.
This raises the test coverage (by line count) in pltcl.c from about 70%
to 86%.

Karl Lehenbauer and Jim Nasby

Discussion: https://postgr.es/m/92a1670d-21b6-8f03-9c13-e4fb2207ab7b@BlueTreble.com
2017-01-09 10:10:22 -05:00
Tom Lane de5fed0d0c Merge two copies of tuple-building code in pltcl.c.
Make pltcl_trigger_handler() construct modified tuples using
pltcl_build_tuple_result(), rather than its own copy of essentially
the same logic.  This results in slightly different message wording for
the error cases, and in one case a different SQLSTATE, but it seems
unlikely that any existing applications are depending on any of those
details.

While at it, fix a typo in commit 26abb50c4: pltcl_build_tuple_result was
applying encoding conversion in the wrong direction.  That would be a
back-patchable bug fix, except the code hasn't shipped yet.

Jim Nasby, reviewed by me

Discussion: https://postgr.es/m/d2c6425a-d9e0-f034-f774-4a872c234d89@BlueTreble.com
2017-01-06 16:22:08 -05:00
Peter Eisentraut 933b46644c Use 'use strict' in all Perl programs 2017-01-05 12:34:48 -05:00
Bruce Momjian 1d25779284 Update copyright via script for 2017 2017-01-03 13:48:53 -05:00
Tom Lane 55caaaeba8 Improve handling of array elements as getdiag_targets and cursor_variables.
There's no good reason why plpgsql's GET DIAGNOSTICS statement can't
support an array element as target variable, since the execution code
already uses the generic exec_assign_value() function to assign to it.
Hence, refactor the grammar to allow that, by making getdiag_target
depend on the assign_var production.

Ideally we'd also let a cursor_variable expand to an element of a
refcursor[] array, but that's substantially harder since those statements
also have to handle bound-cursor-variable cases.  For now, just make sure
the reported error is sensible, ie "cursor variable must be a simple
variable" not "variable must be of type cursor or refcursor".  The latter
was quite confusing from the user's viewpoint, since what he wrote
satisfies the claimed restriction.

Per bug #14463 from Zhou Digoal.  Given the lack of previous complaints,
I see no need for a back-patch.

Discussion: https://postgr.es/m/20161213152548.14897.81245@wrigleys.postgresql.org
2016-12-13 16:33:03 -05:00
Tom Lane 9cda81f005 Be more careful about Python refcounts while creating exception objects.
PLy_generate_spi_exceptions neglected to do Py_INCREF on the new exception
objects, evidently supposing that PyModule_AddObject would do that --- but
it doesn't.  This left us in a situation where a Python garbage collection
cycle could result in deletion of exception object(s), causing server
crashes or wrong answers if the exception objects are used later in the
session.

In addition, PLy_generate_spi_exceptions didn't bother to test for
a null result from PyErr_NewException, which at best is inconsistent
with the code in PLy_add_exceptions.  And PLy_add_exceptions, while it
did do Py_INCREF on the exceptions it makes, waited to do that till
after some PyModule_AddObject calls, creating a similar risk for
failure if garbage collection happened within those calls.

To fix, refactor to have just one piece of code that creates an
exception object and adds it to the spiexceptions module, bumping the
refcount first.

Also, let's add an additional refcount to represent the pointer we're
going to store in a C global variable or hash table.  This should only
matter if the user does something weird like delete the spiexceptions
Python module, but lack of paranoia has caused us enough problems in
PL/Python already.

The fact that PyModule_AddObject doesn't do a Py_INCREF of its own
explains the need for the Py_INCREF added in commit 4c966d920, so we
can improve the comment about that; also, this means we really want
to do that before not after the PyModule_AddObject call.

The missing Py_INCREF in PLy_generate_spi_exceptions was reported and
diagnosed by Rafa de la Torre; the other fixes by me.  Back-patch
to all supported branches.

Discussion: https://postgr.es/m/CA+Fz15kR1OXZv43mDrJb3XY+1MuQYWhx5kx3ea6BRKQp6ezGkg@mail.gmail.com
2016-12-09 15:27:23 -05:00
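
A sketch of the single helper described above; names and error handling are
illustrative, not the actual PL/Python code. The key point is that
PyModule_AddObject steals a reference on success but never adds one of its own:

    static PyObject *
    build_exception(PyObject *mod, const char *qualified_name,
                    const char *attr_name, PyObject *base)
    {
        PyObject   *exc = PyErr_NewException(qualified_name, base, NULL);

        if (exc == NULL)
            return NULL;            /* don't ignore failure */

        /* extra reference backs the pointer kept in a C global or hash table */
        Py_INCREF(exc);

        if (PyModule_AddObject(mod, attr_name, exc) < 0)
        {
            Py_DECREF(exc);         /* drop the extra reference ... */
            Py_DECREF(exc);         /* ... and the one AddObject did not steal */
            return NULL;
        }
        return exc;
    }
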
Tom Lane 4ecd197437 Check that result tupdesc has exactly 1 column in return_next scalar case.
This should always be true, but since we're relying on a tuple descriptor
passed from outside pltcl itself, let's check.  Per a gripe from Coverity.
2016-11-15 16:48:19 -05:00
Tom Lane 1833f1a1c3 Simplify code by getting rid of SPI_push, SPI_pop, SPI_restore_connection.
The idea behind SPI_push was to allow transitioning back into an
"unconnected" state when a SPI-using procedure calls unrelated code that
might or might not invoke SPI.  That sounds good, but in practice the only
thing it does for us is to catch cases where a called SPI-using function
forgets to call SPI_connect --- which is a highly improbable failure mode,
since it would be exposed immediately by direct testing of said function.
As against that, we've had multiple bugs induced by forgetting to call
SPI_push/SPI_pop around code that might invoke SPI-using functions; these
are much harder to catch and indeed have gone undetected for years in some
cases.  And we've had to band-aid around some problems of this ilk by
introducing conditional push/pop pairs in some places, which really kind
of defeats the purpose altogether; if we can't draw bright lines between
connected and unconnected code, what's the point?

Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the
underlying state variable _SPI_curid.  It turns out SPI_restore_connection
can go away too, which is a nice side benefit since it was never more than
a kluge.  Provide no-op macros for the deleted functions so as to avoid an
API break for external modules.

A side effect of this removal is that SPI_palloc and allied functions no
longer permit being called when unconnected; they'll throw an error
instead.  The apparent usefulness of the previous behavior was a mirage
as well, because it was depended on by only a few places (which I fixed in
preceding commits), and it posed a risk of allocations being unexpectedly
long-lived if someone forgot a SPI_push call.

Discussion: <20808.1478481403@sss.pgh.pa.us>
2016-11-08 17:39:57 -05:00
Tom Lane 9257f07872 Replace uses of SPI_modifytuple that intend to allocate in current context.
Invent a new function heap_modify_tuple_by_cols() that is functionally
equivalent to SPI_modifytuple except that it always allocates its result
by simple palloc.  I chose however to make the API details a bit more
like heap_modify_tuple: pass a tupdesc rather than a Relation, and use
bool convention for the isnull array.

Use this function in place of SPI_modifytuple at all call sites where the
intended behavior is to allocate in current context.  (There actually are
only two call sites left that depend on the old behavior, which makes me
wonder if we should just drop this function rather than keep it.)

This new function is easier to use than heap_modify_tuple() for purposes
of replacing a single column (or, really, any fixed number of columns).
There are a number of places where it would simplify the code to change
over, but I resisted that temptation for the moment ... everywhere except
in plpgsql's exec_assign_value(); changing that might offer some small
performance benefit, so I did it.

This is on the way to removing SPI_push/SPI_pop, but it seems like
good code cleanup in its own right.

Discussion: <9633.1478552022@sss.pgh.pa.us>
2016-11-08 15:36:44 -05:00
Tom Lane 6d30fb1f75 Make SPI_fnumber() reject dropped columns.
There's basically no scenario where it's sensible for this to match
dropped columns, so put a test for dropped-ness into SPI_fnumber()
itself, and excise the test from the small number of callers that
were paying attention to the case.  (Most weren't :-(.)

In passing, normalize tests at call sites: always reject attnum <= 0
if we're disallowing system columns.  Previously there was a mixture
of "< 0" and "<= 0" tests.  This makes no practical difference since
SPI_fnumber() never returns 0, but I'm feeling pedantic today.

Also, in the places that are actually live user-facing code and not
legacy cruft, distinguish "column not found" from "can't handle
system column".

Per discussion with Jim Nasby; this supersedes his original patch
that just changed the behavior at one call site.

Discussion: <b2de8258-c4c0-1cb8-7b97-e8538e5c975c@BlueTreble.com>
2016-11-08 13:11:26 -05:00
Tom Lane de4026c673 Use heap_modify_tuple not SPI_modifytuple in pl/python triggers.
The code here would need some change anyway given planned change in
SPI_modifytuple semantics, since this executes after we've exited the
SPI environment.  But really it's better to just use heap_modify_tuple.

While at it, normalize use of SPI_fnumber: make error messages distinguish
no-such-column from can't-set-system-column, and remove test for deleted
column which is going to migrate into SPI_fnumber.  The lack of a check
for system column names is actually a pre-existing bug here, and might
even qualify as a security bug except that we don't have any trusted
version of plpython.
2016-11-08 12:00:24 -05:00
Tom Lane 0d4446083d Use heap_modify_tuple not SPI_modifytuple in pl/perl triggers.
The code here would need some change anyway given planned change in
SPI_modifytuple semantics, since this executes after we've exited the
SPI environment.  But really it's better to just use heap_modify_tuple.
The code's actually shorter this way, and this avoids depending on some
rather indirect reasoning about why the temporary arrays can't be overrun.
(I think the old code is safe, as long as Perl hashes can't contain
duplicate keys; but with this way we don't need that assumption, only
the assumption that SPI_fnumber doesn't return an out-of-range attnum.)

While at it, normalize use of SPI_fnumber: make error messages distinguish
no-such-column from can't-set-system-column, and remove test for deleted
column which is going to migrate into SPI_fnumber.
2016-11-08 11:35:13 -05:00
Tom Lane 7f1bcfb93d Sync pltcl_build_tuple_result's error handling with pltcl_trigger_handler.
Meant to do this in 26abb50c4, but forgot.
2016-11-06 19:22:12 -05:00
Tom Lane 26abb50c49 Support PL/Tcl functions that return composite types and/or sets.
Jim Nasby, rather heavily editorialized by me

Patch: <f2134651-14b3-efeb-f274-c69f3c084031@BlueTreble.com>
2016-11-06 17:56:05 -05:00
Tom Lane 2178cbf40d Modernize result-tuple construction in pltcl_trigger_handler().
Use Tcl_ListObjGetElements instead of Tcl_SplitList.  Aside from being
possibly more efficient in its own right, this means we are no longer
responsible for freeing a malloc'd result array, so we can get rid of
a PG_TRY/PG_CATCH block.

Use heap_form_tuple instead of SPI_modifytuple.  We don't need the
extra generality of the latter, since we're always replacing all
columns.  Nor do we need its memory-context-munging, since at this
point we're already out of the SPI environment.

Per comparison of this code to tuple-building code submitted by Jim Nasby.
I've abandoned the thought of merging the two cases into a single routine,
but we may as well make the older code simpler and faster where we can.
2016-11-06 16:09:57 -05:00
Tom Lane fd2664dcb7 Rationalize and document pltcl's handling of magic ".tupno" array element.
For a very long time, pltcl's spi_exec and spi_execp commands have had
a behavior of storing the current row number as an element of output
arrays, but this was never documented.  Fix that.

For an equally long time, pltcl_trigger_handler had a behavior of silently
ignoring ".tupno" as an output column name, evidently so that the result
of spi_exec could be used directly as a trigger result tuple.  Not sure
how useful that really is, but in any case it's bad that it would break
attempts to use ".tupno" as an actual column name.  We can fix it by not
checking for ".tupno" until after we check for a column name match.  This
comports with the effective behavior of spi_exec[p] that ".tupno" is only
magic when you don't have an actual column named that.

In passing, wordsmith the description of returning modified tuples from
a pltcl trigger.

Noted while working on Jim Nasby's patch to support composite results
from pltcl.  The inability to return trigger tuples using ".tupno" as
a column name is a bug, so back-patch to all supported branches.
2016-11-06 14:43:13 -05:00
Tom Lane fc8b81a291 Need to do SPI_push/SPI_pop around expression evaluation in plpgsql.
We must do this in case the expression evaluation results in calling
another plpgsql function (or, really, anything using SPI).  I missed
the need for this when I converted exec_cast_value() from doing a
simple InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b.
There is a SPI_push_conditional in InputFunctionCall(), so that there
was no bug before that.

Per bug #14414 from Marcos Castedo.  Add a regression test based on his
example, which was that a plpgsql function in a domain check constraint
didn't work when assigning to a domain-type variable within plpgsql.

Report: <20161106010947.1387.66380@wrigleys.postgresql.org>
2016-11-06 12:09:36 -05:00
Tom Lane 1b00dd0ea0 Improve minor error-handling details in pltcl.
Don't ask Tcl_GetIndexFromObj to store an error message in the interpreter
in cases where the next argument isn't necessarily one of the options
we're asking it to check for.  At best that is a waste of time, and at
worst it might cause an inappropriate error result to get left behind.

Be sure to check for valid syntax (ie, no command arguments) in
pltcl_SPI_lastoid.

Extracted from a larger and otherwise-unrelated patch.

Jim Nasby

Patch: <f2134651-14b3-efeb-f274-c69f3c084031@BlueTreble.com>
2016-11-05 17:32:29 -04:00
Peter Eisentraut eaed88ce12 Add function name to PyArg_ParseTuple()
This causes the supplied function name to appear in any error message,
making the error message friendlier and relieving us from having to
provide our own in some cases.
2016-10-27 15:41:29 -04:00
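
The mechanism is the ":name" suffix of the format string passed to
PyArg_ParseTuple; a sketch with invented function and argument names:

    #include <Python.h>

    static PyObject *
    prepare_example(PyObject *self, PyObject *args)
    {
        char       *query;
        PyObject   *typelist = NULL;

        /* "prepare" now appears in any error raised while parsing the args */
        if (!PyArg_ParseTuple(args, "s|O:prepare", &query, &typelist))
            return NULL;

        Py_RETURN_NONE;
    }
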
Peter Eisentraut 84d457edaf Format PL/Python module contents test vertically
It makes it readable again and makes merges more manageable.
2016-10-27 15:41:29 -04:00
Heikki Linnakangas 8eb6337f9f Remove platform-dependent PL/python test case.
Turns out that the output format of Python Decimal isn't totally platform-
independent either. There are other tests for multi-dimensional arrays,
so rather than try to fix this test case, just remove it.

Per buildfarm member prairiedog.
2016-10-26 17:09:18 +03:00
Heikki Linnakangas cfd9c87a54 Only treat Python Lists as array dimensions.
Instead of treating all python sequence types as array dimensions, except
for tuples and various kinds of strings, only treat Python lists as
dimensions. The PyBytes_Check() function used previously is only available
on Python 2.6 and newer, and it was a bit fiddly anyway. The list of
exceptions would require adjustment if Python got a new kind of a sequence
similar to bytes/unicodes/strings, so only checking for Lists seems more
future-proof. The documentation only mentioned using Lists, so this is
closer to what was documented, anyway.

This should fix the buildfarm failures on systems building with Python 2.5,
although I don't have Python 2.5 installed myself to test with.
2016-10-26 14:44:55 +03:00
Heikki Linnakangas 73c8e8506c Avoid using platform-dependent floats in test case.
The number of decimals printed for floats varied in this test case, as
noted by several buildfarm members. There's nothing special about floats
and arrays in the code being tested, so replace the floats with numerics to
make the output platform-independent.
2016-10-26 14:17:07 +03:00
Heikki Linnakangas e131ba4fe5 Fix regression test to also work with Python 2.
Per buildfarm.
2016-10-26 11:18:04 +03:00
Heikki Linnakangas 510e1b8ecf Give a hint, when [] is incorrectly used for a composite type in array.
That used to be accepted, so let's try to give a hint to users on why
their PL/python functions no longer work.

Reviewed by Pavel Stehule.

Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
2016-10-26 10:56:56 +03:00
Heikki Linnakangas 94aceed317 Support multi-dimensional arrays in PL/python.
Multi-dimensional arrays can now be used as arguments to a PL/python function
(this used to throw an error), and they can be returned as nested Python lists.

This makes a backwards-incompatible change to the handling of composite
types in arrays. Previously, you could return an array of composite types
as "[[col1, col2], [col1, col2]]", but now that is interpreted as a two-
dimensional array. Composite types in arrays must now be returned as
Python tuples, not lists, to resolve the ambiguity. I.e. "[(col1, col2),
(col1, col2)]".

To avoid breaking backwards-compatibility, when not necessary, () is still
accepted for arrays at the top-level, but it is always treated as a
single-dimensional array. Likewise, [] is still accepted for composite types,
when they are not in an array. Update the documentation to recommend using []
for arrays, and () for composite types, with a mention that those other things
are also accepted in some contexts.

This needs to be mentioned in the release notes.

Alexey Grishchenko, Dave Cramer and me. Reviewed by Pavel Stehule.

Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
2016-10-26 10:56:30 +03:00
Andres Freund ccbb852cd6 Fix further hash table order dependent tests.
Similar to 0137caf273, this makes contrib and pl tests less dependent on
hash-table order.  After this commit, at least some order-affecting
changes to execGrouping.c don't result in regression test changes
anymore.
2016-10-12 18:31:45 -07:00
Tom Lane 7107d58ec5 Fix misplacement of submake-generated-headers prerequisites.
The sequence "configure; cd src/pl/plpython; make -j" failed due to
trying to compile plpython's .o files before the generated headers
finished building.  (This is an important real-world case, since it's
the typical second step when building both plpython2 and plpython3.)
This happens because the submake-generated-headers target is not
placed in a way to make it a prerequisite to compiling the .o files.
Fix that.

Checking other uses of submake-generated-headers, I noted that the one
attached to pg_regress was similarly misplaced; but it's actually not
needed at all for pg_regress.o, rather regress.o, so move it to be a
prerequisite of that.

Back-patch to 9.6 where submake-generated-headers was introduced
(by commit 548af97fc).  It's not immediately clear to me why the
previous coding didn't have the same issue; but since we've not
had field reports of plpython make failing, leave it alone in the
older branches.

Pavel Raiskup and Tom Lane

Discussion: <1925924.izSMJEZO3x@unused-4-107.brq.redhat.com>
2016-10-01 13:35:13 -04:00
Tom Lane a4c35ea1c2 Improve parser's and planner's handling of set-returning functions.
Teach the parser to reject misplaced set-returning functions during parse
analysis using p_expr_kind, in much the same way as we do for aggregates
and window functions (cf commit eaccfded9).  While this isn't complete
(it misses nesting-based restrictions), it's much better than the previous
error reporting for such cases, and it allows elimination of assorted
ad-hoc expression_returns_set() error checks.  We could add nesting checks
later if it seems important to catch all cases at parse time.

There is one case the parser will now throw error for although previous
versions allowed it, which is SRFs in the tlist of an UPDATE.  That never
behaved sensibly (since it's ill-defined which generated row should be
used to perform the update) and it's hard to see why it should not be
treated as an error.  It's a release-note-worthy change though.

Also, add a new Query field hasTargetSRFs reporting whether there are
any SRFs in the targetlist (including GROUP BY/ORDER BY expressions).
The parser can now set that basically for free during parse analysis,
and we can use it in a number of places to avoid expression_returns_set
searches.  (There will be more such checks soon.)  In some places, this
allows decontorting the logic since it's no longer expensive to check for
SRFs in the tlist --- so I made the checks parallel to the handling of
hasAggs/hasWindowFuncs wherever it seemed appropriate.

catversion bump because adding a Query field changes stored rules.

Andres Freund and Tom Lane

Discussion: <24639.1473782855@sss.pgh.pa.us>
2016-09-13 13:54:24 -04:00
Peter Eisentraut e0013deb59 Make better use of existing enums in plpgsql
plpgsql.h defines a number of enums, but most of the code passes them
around as ints.  Update structs and function prototypes to take enum
types instead.  This clarifies the struct definitions in plpgsql.h in
particular.

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
2016-09-09 12:00:00 -04:00
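In miniature, the kind of change involved (the names below are invented, not the real plpgsql structs): struct fields and prototypes now carry the enum type rather than a plain int.

    /* Before, the field would have been declared "int cmd_type;" even
     * though only enum values were ever stored in it. */
    typedef enum ExampleStmtType
    {
        EXAMPLE_STMT_ASSIGN,
        EXAMPLE_STMT_IF,
        EXAMPLE_STMT_LOOP
    } ExampleStmtType;

    typedef struct ExampleStmt
    {
        ExampleStmtType cmd_type;   /* the enum type is now used directly */
    } ExampleStmt;
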
Tom Lane 6f7c0ea32f Improve memory management for PL/Perl functions.
Unlike PL/Tcl, PL/Perl at least made an attempt to clean up after itself
when a function gets redefined.  But it was still using TopMemoryContext
for the fn_mcxt of argument/result I/O functions, resulting in the
potential for memory leaks depending on what those functions did, and the
retail alloc/free logic was pretty bulky as well.  Fix things to use a
per-function memory context like the other PLs now do.  Tweak a couple of
places where things were being done in a not-very-safe order (on the
principle that a memory leak is better than leaving global state
inconsistent after an error).  Also make some minor cosmetic adjustments,
mostly in field names, to make the code look similar to the way PL/Tcl does
now wherever it's essentially the same logic.

Michael Paquier and Tom Lane

Discussion: <CAB7nPqSOyAsHC6jL24J1B+oK3p=yyNoFU0Vs_B6fd2kdd5g5WQ@mail.gmail.com>
2016-08-31 19:54:58 -04:00
Tom Lane d062245b5b Improve memory management for PL/Tcl functions.
Formerly, the memory used to represent a PL/Tcl function was allocated with
malloc() or in TopMemoryContext, and we'd leak it all if the function got
redefined during the session.  Instead, create a per-function context and
keep everything in or under that context.  Add a reference-counting
mechanism (like the one plpgsql has long had) so that we can safely clean
up an old function definition, either immediately if it's not being
executed or at the end of the outermost execution.

Currently, we only detect that a cached function is obsolete when we next
attempt to call that function.  So this covers the updated-definition case
but leaves cruft around after DROP FUNCTION.  It's not clear whether it's
worth installing a syscache invalidation callback to watch for drops;
none of the other PLs do, so for now we won't do it here either.

Michael Paquier and Tom Lane

Discussion: <CAB7nPqSOyAsHC6jL24J1B+oK3p=yyNoFU0Vs_B6fd2kdd5g5WQ@mail.gmail.com>
2016-08-31 17:27:09 -04:00
Tom Lane ea268cdc9a Add macros to make AllocSetContextCreate() calls simpler and safer.
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
had typos in the context-sizing parameters.  While none of these led to
especially significant problems, they did create minor inefficiencies,
and it's now clear that expecting people to copy-and-paste those calls
accurately is not a great idea.  Let's reduce the risk of future errors
by introducing single macros that encapsulate the common use-cases.
Three such macros are enough to cover all but two special-purpose contexts;
those two calls can be left as-is, I think.

While this patch doesn't in itself improve matters for third-party
extensions, it doesn't break anything for them either, and they can
gradually adopt the simplified notation over time.

In passing, change TopMemoryContext to use the default allocation
parameters.  Formerly it could only be extended 8K at a time.  That was
probably reasonable when this code was written; but nowadays we create
many more contexts than we did then, so that it's not unusual to have a
couple hundred K in TopMemoryContext, even without considering various
dubious code that sticks other things there.  There seems no good reason
not to let it use growing blocks like most other contexts.

Back-patch to 9.6, mostly because that's still close enough to HEAD that
it's easy to do so, and keeping the branches in sync can be expected to
avoid some future back-patching pain.  The bugs fixed by these changes
don't seem to be significant enough to justify fixing them further back.

Discussion: <21072.1472321324@sss.pgh.pa.us>
2016-08-27 17:50:38 -04:00
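The simplified notation looks roughly like this (ALLOCSET_DEFAULT_SIZES is one of the macros added here; the context name is made up):

    #include "postgres.h"
    #include "utils/memutils.h"

    static MemoryContext
    create_example_context(void)
    {
        /*
         * Previously each caller spelled out ALLOCSET_DEFAULT_MINSIZE,
         * ALLOCSET_DEFAULT_INITSIZE and ALLOCSET_DEFAULT_MAXSIZE by hand;
         * the macro bundles them so they can't be mistyped individually.
         */
        return AllocSetContextCreate(CurrentMemoryContext,
                                     "example context",
                                     ALLOCSET_DEFAULT_SIZES);
    }
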
Tom Lane 5697522d84 In plpgsql, don't try to convert int2vector or oidvector to expanded array.
These types are storage-compatible with real arrays, but they don't support
toasting, so of course they can't support expansion either.

Per bug #14289 from Michael Overmeyer.  Back-patch to 9.5 where expanded
arrays were introduced.

Report: <20160818174414.1529.37913@wrigleys.postgresql.org>
2016-08-18 14:49:08 -04:00
Peter Eisentraut 9f31e45a6d Improve formatting of comments in plpgsql.h
This file had some unusual comment layout.  Most of the comments
introducing structs ended up to the right of the screen and following
the start of the struct.  Some comments for struct members ended up
after the member definition.

Fix that by moving comments consistently before what they are
describing.  Also add missing struct tags where missing so that it is
easier to tell what the struct is.
2016-08-18 12:00:00 -04:00
Tom Lane bfaaacc805 Improve plpgsql's memory management to fix some function-lifespan leaks.
In some cases, exiting out of a plpgsql statement due to an error, then
catching the error in a surrounding exception block, led to leakage of
temporary data the statement was working with, because we kept all such
data in the function-lifespan SPI Proc context.  Iterating such behavior
many times within one function call thus led to noticeable memory bloat.

To fix, create an additional memory context meant to have statement
lifespan.  Since many plpgsql statements, particularly the simpler/more
common ones, don't need this, create it only on demand.  Reset this context
at the end of any statement that uses it, and arrange for exception cleanup
to reset it too, thereby fixing the memory-leak issue.  Allow a stack of
such contexts to exist to handle cases where a compound statement needs
statement-lifespan data that persists across calls of inner statements.

While at it, clean up code and improve comments referring to the existing
short-term memory context, which by plpgsql convention is the per-tuple
context of the eval_econtext ExprContext.  We now uniformly refer to that
as the eval_mcontext, whereas the new statement-lifespan memory contexts
are called stmt_mcontext.

This change adds some context-creation overhead, but on the other hand
it allows removal of some retail pfree's in favor of context resets.
On balance it seems to be about a wash performance-wise.

In principle this is a bug fix, but it seems too invasive for a back-patch,
and the infrequency of complaints weighs against taking the risk in the
back branches.  So we'll fix it only in HEAD, at least for now.

Tom Lane, reviewed by Pavel Stehule

Discussion: <17863.1469142152@sss.pgh.pa.us>
2016-08-17 14:51:10 -04:00
Tom Lane 0bb51aa967 Improve parsetree representation of special functions such as CURRENT_DATE.
We implement a dozen or so parameterless functions that the SQL standard
defines special syntax for.  Up to now, that was done by converting them
into more or less ad-hoc constructs such as "'now'::text::date".  That's
messy for multiple reasons: it exposes what should be implementation
details to users, and performance is worse than it needs to be in several
cases.  To improve matters, invent a new expression node type
SQLValueFunction that can represent any of these parameterless functions.

Bump catversion because this changes stored parsetrees for rules.

Discussion: <30058.1463091294@sss.pgh.pa.us>
2016-08-16 20:33:01 -04:00
Tom Lane b5bce6c1ec Final pgindent + perltidy run for 9.6. 2016-08-15 13:42:51 -04:00
Peter Eisentraut 34927b2920 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: cda21c1d7b160b303dc21dfe9d4169f2c8064c60
2016-08-08 11:08:00 -04:00
Tom Lane 95810ed8ee Make pltcl regression tests safe for Danish locale.
Another peculiarity of Danish locale is that it has an unusual idea
of how to sort upper vs. lower case.  One of the pltcl test cases has
an issue with that.  Now that COLLATE works in all supported branches,
we can just change the test to be locale-independent, and get rid of
the variant expected file that used to support non-C locales.
2016-07-21 14:24:07 -04:00
Peter Eisentraut 7d67606569 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3d71988dffd3c0798a8864c55ca4b7833b48abb1
2016-07-18 12:07:49 -04:00
Tom Lane baebab3ace Allow IMPORT FOREIGN SCHEMA within pl/pgsql.
Since IMPORT FOREIGN SCHEMA has an INTO clause, pl/pgsql needs to be
aware of that and avoid capturing the INTO as an INTO-variables clause.
This isn't hard, though it's annoying to have to make IMPORT a plpgsql
keyword just for this.  (Fortunately, we have the infrastructure now
to make it an unreserved keyword, so at least this shouldn't break any
existing pl/pgsql code.)

Per report from Merlin Moncure.  Back-patch to 9.5 where IMPORT FOREIGN
SCHEMA was introduced.

Report: <CAHyXU0wpHf2bbtKGL1gtUEFATCY86r=VKxfcACVcTMQ70mCyig@mail.gmail.com>
2016-07-12 18:07:03 -04:00
Peter Eisentraut 3a4a33ad49 PL/Python: Report argument parsing errors using exceptions
Instead of calling PLy_elog() for reporting Python argument parsing
errors, generate appropriate exceptions.  This matches the existing plpy
functions and is more consistent with the behavior of the Python
argument parsing routines.
2016-07-02 22:53:14 -04:00
Tom Lane 548af97fce Provide and use a makefile target to build all generated headers.
As of 9.6, pg_regress doesn't build unless storage/lwlocknames.h has been
created; but there was nothing forcing that to happen if you just went into
src/test/regress/ and built there.  We previously had a similar complaint
about plpython.

To fix in a way that won't break next time we invent a generated header,
make src/backend/Makefile expose a phony target for updating all the
include files it builds, and invoke that before building pg_regress or
plpython.  In principle, maybe we ought to invoke that everywhere; but
it would add a lot of usually-useless make cycles, so let's just do it
in the places where people have complained.

I made a couple of cosmetic adjustments in src/backend/Makefile as well,
to deal with the generated headers in consistent orders.

Michael Paquier and Tom Lane

Report: <31398.1467036827@sss.pgh.pa.us>
Report: <20150916200959.GB32090@msg.df7cb.de>
2016-07-01 15:09:02 -04:00
Tom Lane 1fe1204e87 Add missing check for malloc failure in plpgsql_extra_checks_check_hook().
Per report from Andreas Seltenreich.  Back-patch to affected versions.

Report: <874m8nn0hv.fsf@elite.ansel.ydns.eu>
2016-06-20 15:36:54 -04:00
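Generically, the check being added looks like this (a sketch, not the actual hook): in a GUC check hook the extra data must be malloc'd, and a failed malloc has to be handled by rejecting the value.

    #include "postgres.h"
    #include "utils/guc.h"

    /* Sketch of a string-GUC check hook; names are illustrative. */
    static bool
    example_check_hook(char **newval, void **extra, GucSource source)
    {
        int        *myextra = (int *) malloc(sizeof(int));

        if (myextra == NULL)
            return false;       /* out of memory: reject the proposed value */

        *myextra = 0;
        *extra = myextra;
        return true;
    }
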
Peter Eisentraut 47981a4665 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 0c374f8d25ed31833a10d24252bc928d41438838
2016-06-20 09:48:08 -04:00
Peter Eisentraut f0688d6e6c PL/Python: Clean up extended error reporting docs and tests
Format the example and test code more to Python style standards.
Improve whitespace.  Improve documentation formatting.
2016-06-15 10:34:11 -04:00
Peter Eisentraut 020140d84d PL/Python: Rename new keyword arguments of plpy.error() etc.
Rename schema -> schema_name etc. to remain consistent with C API and
PL/pgSQL.
2016-06-11 19:27:49 -04:00
Robert Haas 4bc424b968 pgindent run for 9.6 2016-06-09 18:02:36 -04:00
Peter Eisentraut 8359077124 PL/Python: Move ereport wrapper test cases to separate file
In commit 5c3c3cd0a3, the new tests were
apparently just dumped into the first convenient file.  Move them to a
separate file dedicated to testing that functionality and leave the
plpython_test test to test basic functionality, as it did before.
2016-06-07 09:39:11 -04:00
Peter Eisentraut 48aaba4acf Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 17bf3e8564abf600274789fcc90e72532d5e7c05
2016-05-09 10:04:41 -04:00
Tom Lane 1d2f9de38d Fix freshly-introduced PL/Python portability bug.
It turns out that those PyErr_Clear() calls I removed from plpy_elog.c
in 7e3bb08038 et al were not quite as random as they appeared: they
mask a Python 2.3.x bug.  (Specifically, it turns out that PyType_Ready()
can fail if the error indicator is set on entry, and PLy_traceback's fetch
of frame.f_code may be the first operation in a session that requires the
"frame" type to be readied.  Ick.)  Put back the clear call, but in a more
centralized place closer to what it's protecting, and this time with a
comment warning what it's really for.

Per buildfarm member prairiedog.  Although prairiedog was only failing
on HEAD, it seems clearly possible for this to occur in older branches
as well, so back-patch to 9.2 the same as the previous patch.
2016-04-11 18:17:20 -04:00
Peter Eisentraut d8ed83cd7f Fix whitespace 2016-04-11 14:44:51 -04:00
Tom Lane 81ba9348d8 Fix missing "volatile" in PLy_output().
Commit 5c3c3cd0a3 plastered "volatile" on a bunch of variables
in PLy_output(), but removed the one that actually mattered, ie the
one on "oldcontext".  This allows some versions of clang to generate
code in which "oldcontext" has been trashed when control reaches the
PG_CATCH block.  Per buildfarm member tick.
2016-04-11 11:49:54 -04:00
Tom Lane f73b2bbbdc Fix poorly thought-through code from commit 5c3c3cd0a3.
It's not entirely clear to me whether PyString_AsString can return
null (looks like the answer might vary between Python 2 and 3).
But in any case, this code's attempt to cope with the possibility
was quite broken, because pstrdup() neither allows a null argument
nor ever returns a null.

Moreover, the code below this point assumes that "message" is a
palloc'd string, which would not be the case for a dgettext result.

Fix both problems by doing the pstrdup step separately.
2016-04-11 00:28:44 -04:00
Tom Lane 7e3bb08038 Fix access-to-already-freed-memory issue in plpython's error handling.
PLy_elog() could attempt to access strings that Python had already freed,
because the strings that PLy_get_spi_error_data() returns are simply
pointers into storage associated with the error "val" PyObject.  That's
fine at the instant PLy_get_spi_error_data() returns them, but just after
that PLy_traceback() intentionally releases the only refcount on that
object, allowing it to be freed --- so that the strings we pass to
ereport() are dangling pointers.

In principle this could result in garbage output or a coredump.  In
practice, I think the risk is pretty low, because there are no Python
operations between where we decrement that refcount and where we use the
strings (and copy them into PG storage), and thus no reason for Python
to recycle the storage.  Still, it's clearly hazardous, and it leads to
Valgrind complaints when running under a Valgrind that hasn't been
lobotomized to ignore Python memory allocations.

The code was a mess anyway: we fetched the error data out of Python
(clearing Python's error indicator) with PyErr_Fetch, examined it, pushed
it back into Python with PyErr_Restore (re-setting the error indicator),
then immediately pulled it back out with another PyErr_Fetch.  Just to
confuse matters even more, there were some gratuitous-and-yet-hazardous
PyErr_Clear calls in the "examine" step, and we didn't get around to doing
PyErr_NormalizeException until after the second PyErr_Fetch, making it even
less clear which object was being manipulated where and whether we still
had a refcount on it.  (If PyErr_NormalizeException did substitute a
different "val" object, it's possible that the problem could manifest for
real, because then we'd be doing assorted Python stuff with no refcount
on the object we have string pointers into.)

So, rearrange all that into some semblance of sanity, and don't decrement
the refcount on the Python error objects until the end of PLy_elog().
In HEAD, I failed to resist the temptation to reformat some messy bits
from 5c3c3cd0a3 along the way.

Back-patch as far as 9.2, because the code is substantially the same
that far back.  I believe that 9.1 has the bug as well; but the code
around it is rather different and I don't want to take a chance on
breaking something for what seems a low-probability problem.
2016-04-10 23:16:10 -04:00
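In outline, the saner ordering looks like this (heavily simplified; the real plpy_elog.c code does much more):

    #include <Python.h>

    /* Fetch and normalize the pending Python error once, keep our
     * references while the extracted strings are still needed, and only
     * drop them at the very end. */
    static void
    report_python_error(void)
    {
        PyObject   *exc,
                   *val,
                   *tb;

        PyErr_Fetch(&exc, &val, &tb);
        PyErr_NormalizeException(&exc, &val, &tb);

        /* ... pull message/detail strings out of "val" and copy them into
         * PostgreSQL-managed memory while we still hold a reference ... */

        Py_XDECREF(exc);
        Py_XDECREF(val);
        Py_XDECREF(tb);
    }
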
Tom Lane c7a141a986 Fix PL/Python ereport() test to work on Python 2.3.
Per buildfarm.

Pavel Stehule
2016-04-09 16:44:54 -04:00
Teodor Sigaev 5c3c3cd0a3 Enhanced custom error in PLPythonu
This patch adds a new, richer way to emit an error message or exception
from PL/Pythonu code.

Author: Pavel Stehule
Reviewers: Catalin Iacob, Peter Eisentraut, Jim Nasby
2016-04-08 18:33:06 +03:00
Tom Lane 1d2fe56e42 Fix PL/Python for recursion and interleaved set-returning functions.
PL/Python failed if a PL/Python function was invoked recursively via SPI,
since arguments are passed to the function in its global dictionary
(a horrible decision that's far too ancient to undo) and it would delete
those dictionary entries on function exit, leaving the outer recursion
level(s) without any arguments.  Not deleting them would be little better,
since the outer levels would then see the innermost level's arguments.

Since PL/Python uses ValuePerCall mode for evaluating set-returning
functions, it's possible for multiple executions of the same SRF to be
interleaved within a query.  PL/Python failed in such a case, because
it stored only one iterator per function, directly in the function's
PLyProcedure struct.  Moreover, one interleaved instance of the SRF
would see argument values that should belong to another.

Hence, invent code for saving and restoring the argument entries.  To fix
the recursion case, we only need to save at recursive entry and restore
at recursive exit, so the overhead in non-recursive cases is negligible.
To fix the SRF case, we have to save when suspending a SRF and restore
when resuming it, which is potentially not negligible; but fortunately
this is mostly a matter of manipulating Python object refcounts and
should not involve much physical data copying.

Also, store the Python iterator and saved argument values in a structure
associated with the SRF call site rather than the function itself.  This
requires adding a memory context deletion callback to ensure that the SRF
state is cleaned up if the calling query exits before running the SRF to
completion.  Without that we'd leak a refcount to the iterator object in
such a case, resulting in session-lifespan memory leakage.  (In the
pre-existing code, there was no memory leak because there was only one
iterator pointer, but what would happen is that the previous iterator
would be resumed by the next query attempting to use the SRF.  Hardly the
semantics we want.)

We can buy back some of whatever overhead we've added by getting rid of
PLy_function_delete_args(), which seems a useless activity: there is no
need to delete argument entries from the global dictionary on exit,
since the next time anyone would see the global dict is on the next
fresh call of the PL/Python function, at which time we'd overwrite those
entries with new arg values anyway.

Also clean up some really ugly coding in the SRF implementation, including
such gems as returning directly out of a PG_TRY block.  (The only reason
that failed to crash hard was that all existing call sites immediately
exited their own PG_TRY blocks, popping the dangling longjmp pointer before
there was any chance of it being used.)

In principle this is a bug fix; but it seems a bit too invasive relative to
its value for a back-patch, and besides the fix depends on memory context
callbacks so it could not go back further than 9.5 anyway.

Alexey Grishchenko and Tom Lane
2016-04-05 14:51:19 -04:00
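The cleanup hook mentioned above relies on memory context reset callbacks; a bare-bones sketch (the names are invented, not the real PL/Python structures):

    #include "postgres.h"

    /* Invented stand-in for the per-call-site SRF state. */
    typedef struct ExampleSRFState
    {
        void       *iterator;       /* e.g. the saved Python iterator */
        MemoryContextCallback callback;
    } ExampleSRFState;

    static void
    example_srf_cleanup(void *arg)
    {
        ExampleSRFState *state = (ExampleSRFState *) arg;

        /* drop whatever references the suspended SRF was still holding */
        state->iterator = NULL;
    }

    static ExampleSRFState *
    setup_srf_state(MemoryContext percall_cxt)
    {
        ExampleSRFState *state;

        state = MemoryContextAllocZero(percall_cxt, sizeof(ExampleSRFState));
        state->callback.func = example_srf_cleanup;
        state->callback.arg = state;
        /* run the cleanup when the context is reset or deleted, even if
         * the calling query exits before the SRF finishes */
        MemoryContextRegisterResetCallback(percall_cxt, &state->callback);
        return state;
    }
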
Noah Misch 4ad6f13500 Copyedit comments and documentation. 2016-04-01 21:53:10 -04:00
Tom Lane 9f73a2f6d1 Fix PL/Tcl for vpath builds.
Commit cd37bb7859 works for in-tree builds, but not so much for
VPATH.  Per buildfarm.
2016-03-25 17:13:03 -04:00
Tom Lane cd37bb7859 Improve PL/Tcl errorCode facility by providing decoded name for SQLSTATE.
We don't really want to encourage people to write numeric SQLSTATEs in
programs; that's unreadable and error-prone.  Copy plpgsql's infrastructure
for converting between SQLSTATEs and exception names shown in Appendix A,
and modify examples in tests and documentation to do it that way.
2016-03-25 16:54:52 -04:00
Tom Lane fb8d2a7f57 In PL/Tcl, make database errors return additional info in the errorCode.
Tcl has a convention for returning additional info about an error in a
global variable named errorCode.  Up to now PL/Tcl has ignored that,
but this patch causes database errors caught by PL/Tcl to fill in
errorCode with useful information from the ErrorData struct.

Jim Nasby, reviewed by Pavel Stehule and myself
2016-03-25 15:52:53 -04:00
Tom Lane 07341a2980 Update PL/Perl's comment about hv_store().
Negative klen is documented since Perl 5.16, and 5.6 is no longer
supported so no need to comment about it.

Dagfinn Ilmari Mannsåker
2016-03-14 14:45:45 -04:00
Tom Lane f3f3aae4b7 Improve conversions from uint64 to Perl types.
Perl's integers are pointer-sized, so can hold more than INT_MAX on LP64
platforms, and come in both signed (IV) and unsigned (UV).  Floating
point values (NV) may also be larger than double.

Since Perl 5.19.4 array indices are SSize_t instead of I32, so allow up
to SSize_t_max on those versions.  The limit is not imposed just by
av_extend's argument type, but by all the array handling code, so remove
the speculative comment.

Dagfinn Ilmari Mannsåker
2016-03-14 14:38:44 -04:00
Tom Lane 23a27b039d Widen query numbers-of-tuples-processed counters to uint64.
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout.  Some of these values were declared uint32 before, and
others "long".

I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.

The user-visible results are that command tags for SELECT etc will
correctly report tuple counts larger than 4G, as will plpgsql's
GET DIAGNOSTICS ... ROW_COUNT command.  Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.

Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long".  It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.

Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
2016-03-12 16:05:29 -05:00
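For most callers the visible change is just a type change; for instance (a sketch only; "example_table" is made up and SPI_connect() is assumed to have been called), SPI_processed should now be read into a uint64 and printed with the 64-bit format macro:

    #include "postgres.h"
    #include "executor/spi.h"

    static void
    count_updated_rows(void)
    {
        uint64      nprocessed;

        if (SPI_execute("UPDATE example_table SET x = x + 1", false, 0) < 0)
            elog(ERROR, "SPI_execute failed");

        nprocessed = SPI_processed;         /* now uint64, not uint32 */
        elog(NOTICE, "updated " UINT64_FORMAT " rows", nprocessed);
    }
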
Andres Freund e66197fa2e plperl: Correctly handle empty arrays in plperl_ref_from_pg_array.
plperl_ref_from_pg_array() didn't consider the case that Postgres arrays
can have 0 dimensions (when they're empty) and accessed the first
dimension without a check. Fix that by special-casing the empty array
case.

Author: Alex Hunsaker
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Discussion: 20160308063240.usnzg6bsbjrne667@alap3.anarazel.de
Backpatch: 9.1-
2016-03-08 13:42:57 -08:00
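The shape of the fix, roughly (not the actual plperl code, and the Perl interpreter-context plumbing is omitted): an array with zero dimensions simply becomes an empty Perl array reference.

    #include "postgres.h"
    #include "utils/array.h"
    /* EXTERN.h, perl.h and XSUB.h are assumed to be included, as in plperl.c */

    static SV *
    array_to_perl_ref(ArrayType *array)
    {
        AV         *av = newAV();

        if (ARR_NDIM(array) != 0)
        {
            /* ... push ARR_DIMS(array)[0] converted elements onto av ... */
        }
        /* zero dimensions: fall through and return an empty reference */
        return newRV_noinc((SV *) av);
    }
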
Magnus Hagander 6c90996a4c Add prefix to pl/pgsql global variables and functions
Rename pl/pgsql global variables to always have a plpgsql_ prefix,
so they don't conflict with other shared libraries loaded.
2016-03-03 10:45:59 +01:00
Tom Lane c8c7c93de8 Fix PL/Tcl's encoding conversion logic.
PL/Tcl appears to contain logic to convert strings between the database
encoding and UTF8, which is the only encoding modern Tcl will deal with.
However, that code has been disabled since commit 034895125d, which
made it "#if defined(UNICODE_CONVERSION)" and neglected to provide any way
for that symbol to become defined.  That might have been all right back
in 2001, but these days we take a dim view of allowing strings with
incorrect encoding into the database.

Remove the conditional compilation, fix warnings about signed/unsigned char
conversions, clean up assorted places that didn't bother with conversions.
(Notably, there were lots of assumptions that database table and field
names didn't need conversion...)

Add a regression test based on plpython_unicode.  It's not terribly
thorough, but better than no test at all.
2016-03-02 13:30:14 -05:00
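The conversions themselves are ordinary encoding-conversion calls; schematically (a sketch, not the exact helper used in pltcl.c):

    #include "postgres.h"
    #include "mb/pg_wchar.h"
    #include <string.h>

    /* Convert a server-encoding string to UTF-8 before handing it to Tcl;
     * the reverse direction uses pg_any_to_server() in the same way. */
    static char *
    server_to_utf8(const char *s)
    {
        return pg_server_to_any(s, strlen(s), PG_UTF8);
    }
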
Tom Lane e2609323eb Make PL/Tcl require Tcl 8.4 or later.
As of commit 2878220682, PL/Tcl will not
compile against pre-8.0 Tcl, whereas it used to work (more or less anyway)
with quite prehistoric versions.  As long as we're moving these goalposts,
let's reinstall them at someplace that has some thought behind it.  This
commit sets the minimum allowed Tcl version at 8.4, and rips out some bits
of compatibility cruft that are in consequence no longer needed.  Reasons
for requiring 8.4 include:

* 8.4 was released in 2002; there seems little reason to believe that
anyone would want to use older versions with Postgres 9.6+.

* We have no buildfarm members testing anything older than 8.4, and
thus no way to know if it's broken.

* We need at least 8.1 to allow enforcement of database encoding
security (8.1 standardized Tcl on using UTF8 internally, before that
it was pretty unpredictable).

* Some versions between 8.1 and 8.4 allowed the backend to become
multithreaded, which is disastrous.  We need at least 8.4 to be able
to disable the Tcl notifier subsystem to prevent that.

A small side benefit is that we can make the code more readable by
doing s/CONST84/const/g.
2016-03-02 12:24:30 -05:00
Tom Lane 2878220682 Convert PL/Tcl to use Tcl's "object" interfaces.
The original implementation of Tcl was all strings, but they improved
performance significantly by introducing typed "objects" (integers,
lists, code, etc).  It's past time we made use of that; that happened
in Tcl 8.0 which was released in 1997.

This patch also modernizes some of the error-reporting code, which may
cause small changes in the spelling of complaints about bad calls to
PL/Tcl-provided commands.

Jim Nasby and Karl Lehenbauer, reviewed by Victor Wagner
2016-03-02 12:07:31 -05:00
Tom Lane 68c521eb92 Improve coverage of pltcl regression tests.
Test composite-type arguments and the argisnull and spi_lastoid Tcl
commands.  This stuff was not covered before, but needs to be exercised
since the upcoming Tcl object-conversion patch changes these code paths
(and broke at least one of them).
2016-03-01 20:01:16 -05:00
Tom Lane 66f503868b Make plpython cope with funny characters in function names.
A function name that's double-quoted in SQL can contain almost any
characters, but we were using that name directly as part of the name
generated for the Python-level function, and Python doesn't like
anything that isn't pretty much a standard identifier.  To fix,
replace anything that isn't an ASCII letter or digit with an underscore
in the generated name.  This doesn't create any risk of duplicate Python
function names because we were already appending the function OID to
the generated name to ensure uniqueness.  Per bug #13960 from Jim Nasby.

Patch by Jim Nasby, modified a bit by me.  Back-patch to all
supported branches.
2016-02-16 21:08:15 -05:00
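The mangling rule is simple; something along these lines (illustrative, not the exact code; uniqueness still comes from appending the function OID):

    #include <ctype.h>
    #include <stddef.h>

    /* Replace anything that is not an ASCII letter or digit with '_'. */
    static void
    sanitize_python_name(char *dst, const char *src, size_t dstlen)
    {
        size_t      i;

        for (i = 0; src[i] != '\0' && i + 1 < dstlen; i++)
        {
            unsigned char c = (unsigned char) src[i];

            dst[i] = (c < 128 && isalnum(c)) ? (char) c : '_';
        }
        dst[i] = '\0';
    }
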
Tom Lane 796d1e889f Remove no-longer-needed old-style check for incompatible plpythons.
Commit 866566a690 introduced a new mechanism for incompatible
plpythons to detect each other.  I left the old mechanism in place,
because it seems possible that a plpython predating that commit might be
used with one postdating it.  (This would require updating plpython3 but
not plpython2 or vice versa, but that seems well within the realm of
possibility.)  However, surely it will not be able to happen in 9.6 or
later, so we can delete the old mechanism in HEAD.
2016-01-11 20:13:31 -05:00
Tom Lane 866566a690 Avoid dump/reload problems when using both plpython2 and plpython3.
Commit 803716013d installed a safeguard against loading plpython2
and plpython3 at the same time, but asserted that both could still be
used in the same database, just not in the same session.  However, that's
not actually all that practical because dumping and reloading will fail
(since both libraries necessarily get loaded into the restoring session).
pg_upgrade is even worse, because it checks for missing libraries by
loading every .so library mentioned in the entire installation into one
session, so that you can have only one across the whole cluster.

We can improve matters by not throwing the error immediately in _PG_init,
but only when and if we're asked to do something that requires calling
into libpython.  This ameliorates both of the above situations, since
while execution of CREATE LANGUAGE, CREATE FUNCTION, etc will result in
loading plpython, it isn't asked to do anything interesting (at least
not if check_function_bodies is off, as it will be during a restore).

It's possible that this opens some corner-case holes in which a crash
could be provoked with sufficient effort.  However, since plpython
only exists as an untrusted language, any such crash would require
superuser privileges, making it "don't do that" not a security issue.
To reduce the hazards in this area, the error is still FATAL when it
does get thrown.

Per a report from Paul Jones.  Back-patch to 9.2, which is as far back
as the patch applies without work.  (It could be made to work in 9.1,
but given the lack of previous complaints, I'm disinclined to expend
effort so far back.  We've been pretty desultory about support for
Python 3 in 9.1 anyway.)
2016-01-11 19:55:39 -05:00
Bruce Momjian ee94300446 Update copyright for 2016
Backpatch certain files through 9.1
2016-01-02 13:33:40 -05:00
Noah Misch d4b686af0b Instruct Coverity using an assertion.
This should make Coverity deduce that plperl_call_perl_func() does not
dereference NULL argtypes.  Back-patch to 9.5, where the affected code
was introduced.

Michael Paquier
2015-12-05 03:04:17 -05:00
Tom Lane 9be3a4e24d Fix thinko: errmsg -> ereport.
Silly mistake in my commit 09cecdf285, reported by Erik Rijkers.

The fact that the buildfarm didn't find this implies that we are not
testing Perl builds that lack MULTIPLICITY, which is a bit disturbing
from a coverage standpoint.  Until today I'd have said nobody cared
about such configurations anymore; but maybe not.
2015-11-19 14:16:39 -05:00
Tom Lane 8c75ad436f Fix memory leaks in PL/Python.
Previously, plpython was in the habit of allocating a lot of stuff in
TopMemoryContext, and it was very slipshod about making sure that stuff
got cleaned up; in particular, use of TopMemoryContext as fn_mcxt for
function calls represents an unfixable leak, since we generally don't
know what the called function might have allocated in fn_mcxt.  This
results in session-lifespan leakage in certain usage scenarios, as for
example in a case reported by Ed Behn back in July.

To fix, get rid of all the retail allocations in TopMemoryContext.
All long-lived allocations are now made in sub-contexts that are
associated with specific objects (either pl/python procedures, or
Python-visible objects such as cursors and plans).  We can clean these
up when the associated object is deleted.

I went so far as to get rid of PLy_malloc completely.  There were a
couple of places where it could still have been used safely, but on
the whole it was just an invitation to bad coding.

Haribabu Kommi, based on a draft patch by Heikki Linnakangas;
some further work by me
2015-11-05 13:52:40 -05:00
Robert Haas 1efc7e5382 Fix problems with ParamListInfo serialization mechanism.
Commit d1b7c1ffe7 introduced a mechanism
for serializing a ParamListInfo structure to be passed to a parallel
worker.  However, this mechanism failed to handle external expanded
values, as pointed out by Noah Misch.  Repair.

Moreover, plpgsql_param_fetch requires adjustment because the
serialization mechanism needs it to skip evaluating unused parameters
just as we would do when it is called from copyParamList, but params
== estate->paramLI in that case.  To fix, make the bms_is_member test
in that function unconditional.

Finally, have setup_param_list set a new ParamListInfo field,
paramMask, to the parameters actually used in the expression, so that
we don't try to fetch those that are not needed when serializing a
parameter list.  This isn't necessary for correctness, but it makes
the performance of the parallel executor code comparable to what we
do for cases involving cursors.

Design suggestions and extensive review by Noah Misch.  Patch by me.
2015-11-02 18:11:29 -05:00
Andres Freund 86b1e6784b Fix hstore_plpython test when python3 is used.
Due to b67aaf21e8 / CREATE EXTENSION ... CASCADE the test output
contains the extension name in yet another place. Since that's variable
depending on the python version...

Add yet another name mangling stanza to regress-python3-mangle.mk.

Author: Petr Jelinek
2015-10-04 22:29:03 +02:00
Tom Lane b631a46ed8 Fix plperl to handle non-ASCII error message texts correctly.
We were passing error message texts to croak() verbatim, which turns out
not to work if the text contains non-ASCII characters; Perl mangles their
encoding, as reported in bug #13638 from Michal Leinweber.  To fix, convert
the text into a UTF8-encoded SV first.

It's hard to test this without risking failures in different database
encodings; but we can follow the lead of plpython, which is already
assuming that no-break space (U+00A0) has an equivalent in all encodings
we care about running the regression tests in (cf commit 2dfa15de5).

Back-patch to 9.1.  The code is quite different in 9.0, and anyway it seems
too risky to put something like this into 9.0's final minor release.

Alex Hunsaker, with suggestions from Tim Bunce and Tom Lane
2015-09-29 10:52:22 -04:00
Robert Haas 7aea8e4f2d Determine whether it's safe to attempt a parallel plan for a query.
Commit 924bcf4f16 introduced a framework
for parallel computation in PostgreSQL that makes most but not all
built-in functions safe to execute in parallel mode.  In order to have
parallel query, we'll need to be able to determine whether that query
contains functions (either built-in or user-defined) that cannot be
safely executed in parallel mode.  This requires those functions to be
labeled, so this patch introduces an infrastructure for that.  Some
functions currently labeled as safe may need to be revised depending on
how pending issues related to heavyweight locking under parallelism
are resolved.

Parallel plans can't be used except for the case where the query will
run to completion.  If portal execution were suspended, the parallel
mode restrictions would need to remain in effect during that time, but
that might make other queries fail.  Therefore, this patch introduces
a framework that enables consideration of parallel plans only when it
is known that the plan will be run to completion.  This probably needs
some refinement; for example, at bind time, we do not know whether a
query run via the extended protocol will be run to completion or
run with a limited fetch count.  Having the client indicate its
intentions at bind time would constitute a wire protocol break.  Some
contexts in which parallel mode would be safe are not adjusted by this
patch; the default is not to try parallel plans except from call sites
that have been updated to say that such plans are OK.

This commit doesn't introduce any parallel paths or plans; it just
provides a way to determine whether they could potentially be used.
I'm committing it on the theory that the remaining parallel sequential
scan patches will also get committed to this release, hopefully in the
not-too-distant future.

Robert Haas and Amit Kapila.  Reviewed (in earlier versions) by Noah
Misch.
2015-09-16 15:38:47 -04:00
Tom Lane 4d0fc1d54b Don't use "#" as an abbreviation for "number" in PL/Tcl error messages.
Also, rewrite one error message to make it follow our message style
guidelines better.

Euler Taveira and Tom Lane
2015-09-16 12:08:57 -04:00
Tom Lane 0426f349ef Rearrange the handling of error context reports.
Remove the code in plpgsql that suppressed the innermost line of CONTEXT
for messages emitted by RAISE commands.  That was never more than a quick
backwards-compatibility hack, and it's pretty silly in cases where the
RAISE is nested in several levels of function.  What's more, it violated
our design theory that verbosity of error reports should be controlled
on the client side not the server side.

To alleviate the resulting noise increase, introduce a feature in libpq
and psql whereby the CONTEXT field of messages can be suppressed, either
always or only for non-error messages.  Printing CONTEXT for errors only
is now their default behavior.

The actual code changes here are pretty small, but the effects on the
regression test outputs are widespread.  I had to edit some of the
alternative expected outputs by hand; hopefully the buildfarm will soon
find anything I fat-fingered.

In passing, fix up (again) the output line counts in psql's various
help displays.  Add some commentary about how to verify them.

Pavel Stehule, reviewed by Petr Jelínek, Jeevan Chalke, and others
2015-09-05 11:58:33 -04:00
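On the libpq side, the new knob looks approximately like this (PQsetErrorContextVisibility and the PQSHOW_CONTEXT_* values are the interface added by this work; treat the exact spellings as worth double-checking against libpq-fe.h).  psql builds its user-visible setting on top of this.

    #include <libpq-fe.h>

    /* Show CONTEXT lines only for errors (the new default), or always. */
    static void
    tune_context_display(PGconn *conn, int verbose)
    {
        PQsetErrorContextVisibility(conn,
                                    verbose ? PQSHOW_CONTEXT_ALWAYS
                                            : PQSHOW_CONTEXT_ERRORS);
    }
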
Tom Lane 781ed2bfa3 Further tweak wording of error messages about bad CONTINUE/EXIT statements.
Per discussion, a little more verbosity seems called for.
2015-08-25 14:06:13 -04:00
Tom Lane 18391a8f06 Tweak wording of syntax error messages about bad CONTINUE/EXIT statements.
Try to avoid any possible confusion about what these messages mean.
2015-08-23 17:34:47 -04:00
Tom Lane fcdfce6820 Detect mismatched CONTINUE and EXIT statements at plpgsql compile time.
With a bit of tweaking of the compile namestack data structure, we can
verify at compile time whether a CONTINUE or EXIT is legal.  This is
surely better than leaving it to runtime, both because earlier is better
and because we can issue a proper error pointer.  Also, we can get rid
of the ad-hoc old way of detecting the problem, which only took care of
CONTINUE not EXIT.

Jim Nasby, adjusted a bit by me
2015-08-21 20:17:19 -04:00
Tom Lane f469f634ad Fix plpython crash when returning string representation of a RECORD result.
PLyString_ToComposite() blithely overwrote proc->result.out.d, even though
for a composite result type the other union variant proc->result.out.r is
the one that should be valid.  This could result in a crash if out.r had
in fact been filled in (proc->result.is_rowtype == 1) and then somebody
later attempted to use that data; as per bug #13579 from Paweł Michalak.

Just to add insult to injury, it didn't work for RECORD results anyway,
because record_in() would refuse the case.

Fix by doing the I/O function lookup in a local PLyTypeInfo variable,
as we were doing already in PLyObject_ToComposite().  This is not a great
technique because any fn_extra data allocated by the input function will
be leaked permanently (thanks to using TopMemoryContext as fn_mcxt).
But that's a pre-existing issue that is much less serious than a crash,
so leave it to be fixed separately.

This bug would be a potential security issue, except that plpython is
only available to superusers and the crash requires coding the function
in a way that didn't work before today's patches.

Add regression test cases covering all the supported methods of converting
composite results.

Back-patch to 9.1 where the faulty coding was introduced.
2015-08-21 12:21:37 -04:00
Tom Lane 2edb949115 Fix a few bogus statement type names in plpgsql error messages.
plpgsql's error location context messages ("PL/pgSQL function fn-name line
line-no at stmt-type") would misreport a CONTINUE statement as being an
EXIT, and misreport a MOVE statement as being a FETCH.  These are clear
bugs that have been there a long time, so back-patch to all supported
branches.

In addition, in 9.5 and HEAD, change the description of EXECUTE from
"EXECUTE statement" to just plain EXECUTE; there seems no good reason why
this statement type should be described differently from others that have
a well-defined head keyword.  And distinguish GET STACKED DIAGNOSTICS from
plain GET DIAGNOSTICS.  These are a bit more of a judgment call, and also
affect existing regression-test outputs, so I did not back-patch into
stable branches.

Pavel Stehule and Tom Lane
2015-08-18 19:22:37 -04:00
Tom Lane d3eaab3ef0 Fix performance bug from conflict between two previous improvements.
My expanded-objects patch (commit 1dc5ebc907) included code to make
plpgsql pass expanded-object variables as R/W pointers to certain functions
that are trusted for modifying such variables in-place.  However, that
optimization got broken by commit 6c82d8d1fd, which arranged to share
a single ParamListInfo across most expressions evaluated by a plpgsql
function.  We don't want a R/W pointer to be passed to other functions
just because we decided one function was safe!  Fortunately, the breakage
was in the other direction, of never passing a R/W pointer at all, because
we'd always have pre-initialized the shared array slot with a R/O pointer.
So it was still functionally correct, but we were back to O(N^2)
performance for repeated use of "a := a || x".  To fix, force an unshared
param array to be used when the R/W param optimization is active.

Commit 6c82d8d1fd is in HEAD only, so no need for a back-patch.
2015-08-17 19:39:35 -04:00
Tom Lane 83604cc423 Repair unsafe use of shared typecast-lookup table in plpgsql DO blocks.
DO blocks use private simple_eval_estates to avoid intra-transaction memory
leakage, cf commit c7b849a89.  I had forgotten about that while writing
commit 0fc94a5ba, but it means that expression execution trees created
within a DO block disappear immediately on exiting the DO block, and hence
can't safely be linked into plpgsql's session-wide cast hash table.
To fix, give a DO block a private cast hash table to go with its private
simple_eval_estate.  This is less efficient than one could wish, since
DO blocks can no longer share any cast lookup work with other plpgsql
execution, but it shouldn't be too bad; in any case it's no worse than
what happened in DO blocks before commit 0fc94a5ba.

Per bug #13571 from Feike Steenbergen.  Preliminary analysis by
Oleksandr Shulgin.
2015-08-15 12:00:48 -04:00
Andres Freund e95126cf04 Don't use function definitions looking like old-style ones.
This fixes a bunch of somewhat pedantic warnings with new
compilers.  Since by far the majority of other function definitions use
the (void) style, it seems consistent to do so in the remaining few
places as well.
2015-08-15 17:25:00 +02:00
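The change in miniature (the function name is hypothetical):

    /* Before (old-style, unprototyped, and now warned about):
     *     static void do_cleanup() { ... }
     * After, with an explicit (void) parameter list: */
    static void
    do_cleanup(void)
    {
        /* ... */
    }
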
Peter Eisentraut f16d52269a PL/Python: Make tests pass with Python 3.5
The error message wording for AttributeError has changed in Python 3.5.
For the plpython_error test, add a new expected file.  In the
plpython_subtransaction test, we didn't really care what the exception
is, only that it is something coming from Python.  So use a generic
exception instead, which has a message that doesn't vary across
versions.
2015-08-13 23:55:20 -04:00
Tom Lane 09cecdf285 Fix a number of places that produced XX000 errors in the regression tests.
It's against project policy to use elog() for user-facing errors, or to
omit an errcode() selection for errors that aren't supposed to be "can't
happen" cases.  Fix all the violations of this policy that result in
ERRCODE_INTERNAL_ERROR log entries during the standard regression tests,
as errors that can reliably be triggered from SQL surely should be
considered user-facing.

I also looked through all the files touched by this commit and fixed
other nearby problems of the same ilk.  I do not claim to have fixed
all violations of the policy, just the ones in these files.

In a few places I also changed existing ERRCODE choices that didn't
seem particularly appropriate; mainly replacing ERRCODE_SYNTAX_ERROR
by something more specific.

Back-patch to 9.5, but no further; changing ERRCODE assignments in
stable branches doesn't seem like a good idea.
2015-08-02 23:49:19 -04:00
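The policy in concrete terms (a generic sketch; the message and errcode are made up): user-facing failures go through ereport() with an explicit errcode, while bare elog(ERROR, ...) is reserved for can't-happen cases and is reported as XX000.

    #include "postgres.h"

    static void
    check_weight(int weight)
    {
        if (weight < 0)
            ereport(ERROR,
                    (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                     errmsg("weight must not be negative")));
    }
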
Alvaro Herrera f8d67ca8d4 Fix (some of) pltcl memory usage
As reported by Bill Parker, PL/Tcl did not validate some malloc() calls
against NULL return.  Fix by using palloc() in a new long-lived memory
context instead.  This allows us to simplify error handling too, by
simply deleting the memory context instead of doing retail frees.

There's still a lot that could be done to improve PL/Tcl's memory
handling ...

This is pretty ancient, so backpatch all the way back.

Author: Michael Paquier and Álvaro Herrera
Discussion: https://www.postgresql.org/message-id/CAFrbyQwyLDYXfBOhPfoBGqnvuZO_Y90YgqFM11T2jvnxjLFmqw@mail.gmail.com
2015-07-20 14:10:07 +02:00
Tom Lane 0fc94a5bab Repair mishandling of cached cast-expression trees in plpgsql.
In commit 1345cc67bb, I introduced caching
of expressions representing type-cast operations into plpgsql.  However,
I supposed that I could cache both the expression trees and the evaluation
state trees derived from them for the life of the session.  This doesn't
work, because we execute the expressions in plpgsql's simple_eval_estate,
which has an ecxt_per_query_memory that is only transaction-lifespan.
Therefore we can end up putting pointers into the evaluation state tree
that point to transaction-lifespan memory; in particular this happens if
the cast expression calls a SQL-language function, as reported by Geoff
Winkless.

The minimum-risk fix seems to be to treat the state trees the same way
we do for "simple expression" trees in plpgsql, ie create them in the
simple_eval_estate's ecxt_per_query_memory, which means recreating them
once per transaction.

Since I had to introduce bookkeeping overhead for that anyway, I bought
back some of the added cost by sharing the read-only expression trees
across all functions in the session, instead of using a per-function
table as originally.  The simple-expression bookkeeping takes care of
the recursive-usage risk that I was concerned about avoiding before.

At some point we should take a harder look at how all this works,
and see if we can't reduce the amount of tree reinitialization needed.
But that won't happen for 9.5.
2015-07-17 15:53:09 -04:00
Tom Lane 6c82d8d1fd Further reduce overhead for passing plpgsql variables to the executor.
This builds on commit 21dcda2713 by keeping
a plpgsql function's shared ParamListInfo's entries for simple variables
(PLPGSQL_DTYPE_VARs) valid at all times.  That adds a few cycles to each
assignment to such variables, but saves significantly more cycles each time
they are used; so except in the pathological case of many dead stores, this
should always be a win.  Initial testing says it's good for about a 10%
speedup of simple calculations; more in large functions with many datums.

We can't use this method for row/record references unfortunately, so what
we do for those is reset those ParamListInfo slots after use; which we
can skip doing unless some of them were actually evaluated during the
previous evaluation call.  So this should frequently be a win as well,
while worst case is that it's similar cost to the previous approach.

Also, closer study suggests that the previous method of instantiating a
new ParamListInfo array per evaluation is actually probably optimal for
cursor-opening executor calls.  The reason is that whatever is visible in
the array is going to get copied into the cursor portal via copyParamList.
So if we used the function's main ParamListInfo for those calls, we'd end
up with all of its DTYPE_VAR vars getting copied, which might well include
large pass-by-reference values that the cursor actually has no need for.
To avoid a possible net degradation in cursor cases, go back to creating
and filling a private ParamListInfo in those cases (which therefore will be
exactly the same speed as before 21dcda2713).  We still get some benefit
out of this though, because this approach means that we only have to defend
against copyParamList's try-to-fetch-every-slot behavior in the case of an
unshared ParamListInfo; so plpgsql_param_fetch() can skip testing
expr->paramnos in the common case.

To ensure that the main ParamListInfo's image of a DTYPE_VAR datum is
always valid, all assignments to such variables are now funneled through
assign_simple_var().  But this makes for cleaner and shorter code anyway.
2015-07-05 12:57:17 -04:00
Peter Eisentraut c5e5d444de Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: fb7e72f46cfafa1b5bfe4564d9686d63a1e6383f
2015-06-28 23:56:55 -04:00
Peter Eisentraut 103382abf8 PL/Perl: Add alternative expected file for Perl 5.22 2015-06-21 10:37:24 -04:00
Tom Lane ae58f1430a Fix failure to cover scalar-vs-rowtype cases in exec_stmt_return().
In commit 9e3ad1aac5 I modified plpgsql
to use exec_stmt_return's simple-variables fast path in more cases.
However, I overlooked that there are really two different return
conventions in use here, depending on whether estate->retistuple is true,
and the existing fast-path code had only bothered to handle one of them.
So trying to return a scalar in a function returning composite, or vice
versa, could lead to unexpected error messages (typically "cache lookup
failed for type 0") or to a null-pointer-dereference crash.

In the DTYPE_VAR case, we can just throw error if retistuple is true,
corresponding to what happens in the general-expression code path that was
being used previously.  (Perhaps someday both of these code paths should
attempt a coercion, but today is not that day.)

In the REC and ROW cases, just hand the problem to exec_eval_datum()
when not retistuple.  Also clean up the ROW coding slightly so it looks
more like exec_eval_datum().

The previous commit also caused exec_stmt_return_next() to be used in
more cases, but that code seems to be OK as-is.

Per off-list report from Serge Rielau.  This bug is new in 9.5 so no need
to back-patch.
2015-06-12 13:44:06 -04:00
Bruce Momjian 807b9e0dff pgindent run for 9.5 2015-05-23 21:35:49 -04:00
Heikki Linnakangas 4fc72cc7bb Collection of typo fixes.
Use "a" and "an" correctly, mostly in comments. Two error messages were
also fixed (they were just elogs, so no translation work required). Two
function comments in pg_proc.h were also fixed. Etsuro Fujita reported one
of these, but I found a lot more with grep.

Also fix a few other typos spotted while grepping for the a/an typos.
For example, "consists out of ..." -> "consists of ...". Plus a "though"/
"through" mixup reported by Euler Taveira.

Many of these typos were in old code, which would be nice to backpatch to
make future backpatching easier. But much of the code was new, and I didn't
feel like crafting separate patches for each branch. So no backpatching.
2015-05-20 16:56:22 +03:00
Tom Lane 0c071936e9 Revert error-throwing wrappers for the printf family of functions.
This reverts commit 16304a0134, except
for its changes in src/port/snprintf.c; as well as commit
cac18a76bb which is no longer needed.

Fujii Masao reported that the previous commit caused failures in psql on
OS X, since if one exits the pager program early while viewing a query
result, psql sees an EPIPE error from fprintf --- and the wrapper function
thought that was reason to panic.  (It's a bit surprising that the same
does not happen on Linux.)  Further discussion among the security list
concluded that the risk of other such failures was far too great, and
that the one-size-fits-all approach to error handling embodied in the
previous patch is unlikely to be workable.

This leaves us again exposed to the possibility of the type of failure
envisioned in CVE-2015-3166.  However, that failure mode is strictly
hypothetical at this point: there is no concrete reason to believe that
an attacker could trigger information disclosure through the supposed
mechanism.  In the first place, the attack surface is fairly limited,
since so much of what the backend does with format strings goes through
stringinfo.c or psprintf(), and those already had adequate defenses.
In the second place, even granting that an unprivileged attacker could
control the occurrence of ENOMEM with some precision, it's a stretch to
believe that he could induce it just where the target buffer contains some
valuable information.  So we concluded that the risk of non-hypothetical
problems induced by the patch greatly outweighs the security risks.
We will therefore revert, and instead undertake closer analysis to
identify specific calls that may need hardening, rather than attempt a
universal solution.

We have kept the portion of the previous patch that improved snprintf.c's
handling of errors when it calls the platform's sprintf().  That seems to
be an unalloyed improvement.

Security: CVE-2015-3166
2015-05-19 18:19:38 -04:00
Noah Misch 16304a0134 Add error-throwing wrappers for the printf family of functions.
All known standard library implementations of these functions can fail
with ENOMEM.  A caller neglecting to check for failure would experience
missing output, information exposure, or a crash.  Check return values
within wrappers and code, currently just snprintf.c, that bypasses the
wrappers.  The wrappers do not return after an error, so their callers
need not check.  Back-patch to 9.0 (all supported versions).

Popular free software standard library implementations do take pains to
bypass malloc() in simple cases, but they risk ENOMEM for floating point
numbers, positional arguments, large field widths, and large precisions.
No specification demands such caution, so this commit regards every call
to a printf family function as a potential threat.

Injecting the wrappers implicitly is a compromise between patch scope
and design goals.  I would prefer to edit each call site to name a
wrapper explicitly.  libpq and the ECPG libraries would, ideally, convey
errors to the caller rather than abort().  All that would be painfully
invasive for a back-patched security fix, hence this compromise.

Security: CVE-2015-3166
2015-05-18 10:02:31 -04:00
Tom Lane 1dc5ebc907 Support "expanded" objects, particularly arrays, for better performance.
This patch introduces the ability for complex datatypes to have an
in-memory representation that is different from their on-disk format.
On-disk formats are typically optimized for minimal size, and in any case
they can't contain pointers, so they are often not well-suited for
computation.  Now a datatype can invent an "expanded" in-memory format
that is better suited for its operations, and then pass that around among
the C functions that operate on the datatype.  There are also provisions
(rudimentary as yet) to allow an expanded object to be modified in-place
under suitable conditions, so that operations like assignment to an element
of an array need not involve copying the entire array.
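
For illustration only, a hedged sketch of how C-level datatype code can
recognize the new representation; the function here is invented and not
part of this patch:

    #include "postgres.h"
    #include "utils/expandeddatum.h"

    /* Sketch: branch on whether a Datum carries the expanded in-memory form */
    static void
    inspect_value(Datum d)
    {
        if (VARATT_IS_EXTERNAL_EXPANDED(DatumGetPointer(d)))
        {
            ExpandedObjectHeader *eoh = DatumGetEOHP(d);

            elog(DEBUG1, "expanded object, flat size %zu",
                 EOH_get_flat_size(eoh));
        }
        else
            elog(DEBUG1, "ordinary flat (or toasted) value");
    }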

The initial application for this feature is arrays, but it is not hard
to foresee using it for other container types like JSON, XML and hstore.
I have hopes that it will be useful to PostGIS as well.

In this initial implementation, a few heuristics have been hard-wired
into plpgsql to improve performance for arrays that are stored in
plpgsql variables.  We would like to generalize those hacks so that
other datatypes can obtain similar improvements, but figuring out some
appropriate APIs is left as a task for future work.  (The heuristics
themselves are probably not optimal yet, either, as they sometimes
force expansion of arrays that would be better left alone.)

Preliminary performance testing shows impressive speed gains for plpgsql
functions that do element-by-element access or update of large arrays.
There are other cases that get a little slower, as a result of added array
format conversions; but we can hope to improve anything that's annoyingly
bad.  In any case most applications should see a net win.

Tom Lane, reviewed by Andres Freund
2015-05-14 12:08:49 -04:00
Peter Eisentraut dcf5e31908 PL/Python: Remove procedure cache invalidation
This was added to react to changes in the pg_transform catalog, but
building with CLOBBER_CACHE_ALWAYS showed that PL/Python was not
prepared for having its procedure cache cleared.  Since this is a
marginal use case, and we don't do this for other catalogs anyway, we
can postpone this to another day.
2015-05-12 22:52:18 -04:00
Andrew Dunstan b6b2149e48 Fix python_includespec on Windows at configure time
By converting to forward slashes at configure time we avoid having to
repeat the logic anywhere it is needed, such as in the transform modules
for plpython.
2015-05-03 08:17:04 -04:00
Peter Eisentraut 67df9782e9 Windows also needs an override of the shared libpython detection 2015-05-02 13:23:16 -04:00
Peter Eisentraut 0fd764647a Make hstore_plperl's build even more like plperl's
Combine the two places that set CPPFLAGS into one.  Also, some settings
should be restricted to Windows only.  More precisely, -Wno-comment is
a GCC-only option, but a makefile build on Windows implies GCC at the moment.

Also, since -Wno-comment is more properly a preprocessor option, move it
to CPPFLAGS to simplify things a bit.
2015-05-01 22:16:58 -04:00
Peter Eisentraut d664a10f96 Move interpreter shared library detection to configure
For building PL/Perl, PL/Python, and PL/Tcl, we need a shared library of
libperl, libpython, and libtcl, respectively.  Previously, this was
checked in the makefiles, skipping the PL build with a warning if no
shared library was available.  Now this is checked in configure, with an
error if no shared library is available.

The previous situation arose because in the olden days, the configure
options --with-perl, --with-python, and --with-tcl controlled whether
frontend interfaces for those languages would be built.  The procedural
languages were added later, and shared libraries were often not
available in the beginning.  So it was decided to skip the builds of the
procedural languages in those cases.  The frontend interfaces have since
been removed from the tree, and shared libraries are now available most
of the time, so that setup makes much less sense now.

Also, the new setup allows contrib modules and pgxs users to rely on the
respective PLs being available based on configure flags.
2015-05-01 21:38:21 -04:00
Peter Eisentraut dbf2ec1a1c Fix parallel make risk with new check temp-install setup
The "check" target no longer needs to depend on "all", because it now
runs "install" directly, which in turn depends on "all".  Doing both
will cause problems with parallel make, because two builds will run next
to each other.

Also remove the redirection of the temp-install output into a log file.
This was appropriate when this was done from within pg_regress, but now
it's just a regular make run, and especially with the above changes this
will now take the place of running the "all" target before the test
suites.

problem report by Jeff Janes, patch in part by Michael Paquier
2015-04-29 20:34:22 -04:00
Peter Eisentraut cac7658205 Add transforms feature
This provides a mechanism for specifying conversions between SQL data
types and procedural languages.  As examples, there are transforms
for hstore and ltree for PL/Perl and PL/Python.

reviews by Pavel Stěhule and Andres Freund
2015-04-26 10:33:14 -04:00
Peter Eisentraut dcae5facca Improve speed of make check-world
Before, make check-world would create a new temporary installation for
each test suite, which is slow and wasteful.  Instead, we now create one
test installation that is used by all test suites that are part of a
make run.

The management of the temporary installation is removed from pg_regress
and handled in the makefiles.  This allows for better control, and
unifies the code with that of test suites not run through pg_regress.

review and msvc support by Michael Paquier <michael.paquier@gmail.com>

more review by Fabien Coelho <coelho@cri.ensmp.fr>
2015-04-23 08:59:52 -04:00
Tom Lane 785941cdc3 Tweak __attribute__-wrapping macros for better pgindent results.
This improves on commit bbfd7edae5 by
making two simple changes:

* pg_attribute_noreturn now takes parentheses, ie pg_attribute_noreturn().
Likewise pg_attribute_unused(), pg_attribute_packed().  This reduces
pgindent's tendency to misformat declarations involving them.

* attributes are now always attached to function declarations, not
definitions.  Previously some places were taking creative shortcuts,
which were not merely candidates for bad misformatting by pgindent
but often were outright wrong anyway.  (It does little good to put a
noreturn annotation where callers can't see it.)  In any case, if
we would like to believe that these macros can be used with non-gcc
compilers, we should avoid gratuitous variance in usage patterns.

I also went through and manually improved the formatting of a lot of
declarations, and got rid of excessively repetitive (and now obsolete
anyway) comments informing the reader what pg_attribute_printf is for.
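
For illustration, with invented function names, declaration-side usage
under the parenthesized convention looks roughly like this (the macros
themselves come from c.h):

    /* invented declarations, only to show where the annotations now go */
    extern void my_fatal_exit(int code) pg_attribute_noreturn();
    extern int  my_log_printf(const char *fmt, ...) pg_attribute_printf(1, 2);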
2015-03-26 14:03:25 -04:00
Tom Lane a4847fc3ef Add an ASSERT statement in plpgsql.
This is meant to make it easier to insert simple debugging cross-checks
in plpgsql functions.

Pavel Stehule, reviewed by Jim Nasby
2015-03-25 19:05:32 -04:00
Alvaro Herrera 13dbc7a824 array_offset() and array_offsets()
These functions return the offset position or positions of a value in an
array.

Author: Pavel Stěhule
Reviewed by: Jim Nasby
2015-03-18 16:01:34 -03:00
Tom Lane 5ff683962e Remove obsolete comment.
Obsoleted by commit 21dcda2713, but I missed
seeing the cross-reference in the comments for exec_eval_integer().

Also improve the cross-reference in the comments for exec_eval_cleanup().
2015-03-14 17:07:11 -04:00
Peter Eisentraut ff2faeec5c PL/Python: Fix regression tests for Python 3 2015-03-11 18:30:56 -04:00
Peter Eisentraut 1ce7a57ca6 PL/Python: Avoid lossiness in float conversion
PL/Python uses str() to convert Python values back to PostgreSQL, but
str() is lossy for float values, so use repr() instead in that case.

Author: Marko Kreen <markokr@gmail.com>
2015-03-11 15:46:06 -04:00
Tom Lane c6b3c939b7 Make operator precedence follow the SQL standard more closely.
While the SQL standard is pretty vague on the overall topic of operator
precedence (because it never presents a unified BNF for all expressions),
it does seem reasonable to conclude from the spec for <boolean value
expression> that OR has the lowest precedence, then AND, then NOT, then IS
tests, then the six standard comparison operators, then everything else
(since any non-boolean operator in a WHERE clause would need to be an
argument of one of these).

We were only sort of on board with that: most notably, while "<" ">" and
"=" had properly low precedence, "<=" ">=" and "<>" were treated as generic
operators and so had significantly higher precedence.  And "IS" tests were
even higher precedence than those, which is very clearly wrong per spec.

Another problem was that "foo NOT SOMETHING bar" constructs, such as
"x NOT LIKE y", were treated inconsistently because of a bison
implementation artifact: they had the documented precedence with respect
to operators to their right, but behaved like NOT (i.e., very low priority)
with respect to operators to their left.

Fixing the precedence issues is just a small matter of rearranging the
precedence declarations in gram.y, except for the NOT problem, which
requires adding an additional lookahead case in base_yylex() so that we
can attach a different token precedence to NOT LIKE and allied two-word
operators.

The bulk of this patch is not the bug fix per se, but adding logic to
parse_expr.c to allow giving warnings if an expression has changed meaning
because of these precedence changes.  These warnings are off by default
and are enabled by the new GUC operator_precedence_warning.  It's believed
that very few applications will be affected by these changes, but it was
agreed that a warning mechanism is essential to help debug any that are.
2015-03-11 13:22:52 -04:00
Tom Lane 21dcda2713 Allocate ParamListInfo once per plpgsql function, not once per expression.
setup_param_list() was allocating a fresh ParamListInfo for each query or
expression evaluation requested by a plpgsql function.  There was probably
once good reason to do it like that, but for a long time we've had a
convention that there's a one-to-one mapping between the function's
PLpgSQL_datum array and the ParamListInfo slots, which means that a single
ParamListInfo can serve all the function's evaluation requests: the data
that would need to be passed is the same anyway.

In this patch, we retain the pattern of zeroing out the ParamListInfo
contents during each setup_param_list() call, because some of the slots may
be stale and we don't know exactly which ones.  So this patch only saves a
palloc/pfree per evaluation cycle and nothing more; still, that seems to be
good for a couple percent overall speedup on simple-arithmetic type
statements.  In future, though, we might be able to improve matters still
more by managing the param array contents more carefully.

Also, unify the former use of estate->cur_expr with that of
paramLI->parserSetupArg; they both were used to point to the active
expression, so we can combine the variables into just one.
2015-03-11 12:40:43 -04:00
Andres Freund bbfd7edae5 Add macros wrapping all usage of gcc's __attribute__.
Until now __attribute__() was defined to be empty for all compilers but
gcc. That's problematic because it prevents using it in other compilers,
which is necessary e.g. for atomics portability.  It's also just
generally dubious to do so in a header as widely included as c.h.

Instead add pg_attribute_format_arg and pg_attribute_printf macros, which
are implemented in the compilers that understand them. Also add
pg_attribute_noreturn and pg_attribute_packed, but don't provide fallbacks,
since they can affect functionality.
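
Very roughly, and simplified relative to the real definitions in c.h
(which cover more compilers), the wrapping idea looks like this sketch:

    /* simplified sketch, not the actual c.h code */
    #if defined(__GNUC__)
    #define pg_attribute_format_arg(a) __attribute__((format_arg(a)))
    #define pg_attribute_printf(f, a)  __attribute__((format(printf, f, a)))
    #else
    #define pg_attribute_format_arg(a) /* empty fallback is harmless here */
    #define pg_attribute_printf(f, a)
    #endif
    /* no empty fallback for pg_attribute_packed(): silently ignoring it
       would change struct layout, i.e. functionality */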

This means that external code that, possibly unwittingly, relied on
__attribute__ being defined to empty on !gcc compilers may now run into
warnings or errors on those compilers. But there shouldn't be many
occurrences of that, and it's hard to work around...

Discussion: 54B58BA3.8040302@ohmu.fi
Author: Oskari Saarenmaa, with some minor changes by me.
2015-03-11 14:30:01 +01:00
Tom Lane 2fbb286647 Clean up the mess from => patch.
Commit 865f14a2d3 was quite a few bricks
shy of a load: psql, ecpg, and plpgsql were all left out-of-step with
the core lexer.  Of these only the last was likely to be a fatal
problem; but still, a minimal amount of grepping, or even just reading
the comments adjacent to the places that were changed, would have found
the other places that needed to be changed.
2015-03-10 11:48:38 -04:00
Tom Lane 7f3014dce5 Change plpgsql's cast cache to consider source typmod as significant.
I had thought that there was no need to maintain separate cache entries
for different source typmods, but further experimentation shows that there
is an advantage to doing so in some cases.  In particular, if a domain has
a typmod (say, "CREATE DOMAIN d AS numeric(20,0)"), failing to notice the
source typmod leads to applying a length-coercion step even when the
source has the correct typmod.
2015-03-04 20:23:13 -05:00
Tom Lane 45f2c2fc4e Need to special-case RECORD as well as UNKNOWN in plpgsql's casting logic.
This is because can_coerce_type thinks that RECORD can be cast to any
composite type, but coerce_record_to_complex only works for inputs that are
RowExprs or whole-row Vars, so we get a hard failure on a CaseTestExpr.

Perhaps these corner cases ought to be fixed so that coerce_to_target_type
actually returns NULL as per its specification, rather than failing ...
but for the moment an extra check here is the path of least resistance.
2015-03-04 19:10:57 -05:00
Tom Lane 1345cc67bb Use standard casting mechanism to convert types in plpgsql, when possible.
plpgsql's historical method for converting datatypes during assignments was
to apply the source type's output function and then the destination type's
input function.  Aside from being miserably inefficient in most cases, this
method failed outright in many cases where a user might expect it to work;
an example is that "declare x int; ... x := 3.9;" would fail rather than
round the value to 4.

Instead, let's convert by applying the appropriate assignment cast whenever
there is one.  To avoid breaking compatibility unnecessarily, fall back to
the I/O conversion method if there is no assignment cast.

So far as I can tell, there is just one case where this method produces a
different result than the old code in a case where the old code would not
have thrown an error.  That is assignment of a boolean value to a string
variable (type text, varchar, or bpchar); the old way gave boolean's output
representation, ie 't'/'f', while the new way follows the behavior of the
bool-to-text cast and so gives 'true' or 'false'.  This will need to be
called out as an incompatibility in the 9.5 release notes.

Aside from handling many conversion cases more sanely, this method is
often significantly faster than the old way.  In part that's because
of more effective caching of the conversion info.
2015-03-04 11:04:30 -05:00
Tom Lane e524cbdc45 Track typmods in plpgsql expression evaluation and assignment.
The main value of this change is to avoid expensive I/O conversions when
assigning to a variable that has a typmod specification, if the value
to be assigned is already known to have the right typmod.  This is
particularly valuable for arrays with typmod specifications; formerly,
in an assignment to an array element the entire array would invariably
get put through double I/O conversion to check the typmod, to absolutely
no purpose since we'd already properly coerced the new element value.

Extracted from my "expanded arrays" patch; this seems worth committing
separately, whatever becomes of that patch, since it's really an
independent issue.

As long as we're changing the function signatures, take the opportunity
to rationalize the argument lists of exec_assign_value, exec_cast_value,
and exec_simple_cast_value; that is, put the arguments into a saner order,
and get rid of the bizarre choice to pass exec_assign_value's isNull flag
by reference.
2015-02-28 14:34:35 -05:00
Noah Misch f5ef00aed4 Free SQLSTATE and SQLERRM no earlier than other PL/pgSQL variables.
"RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed
memory.  Back-patch to 9.0 (all supported versions).  Little code ran
between the premature free and the read, so non-assert builds are
unlikely to witness user-visible consequences.
2015-02-25 23:48:28 -05:00
Jeff Davis b419865a81 In array_agg(), don't create a new context for every group.
Previously, each new array created a new memory context that started
out at 8kB. This is incredibly wasteful when there are lots of small
groups of just a few elements each.

Change initArrayResult() and friends to accept a "subcontext" argument
to indicate whether the caller wants the ArrayBuildState allocated in
a new subcontext or not. If not, it can no longer be released
separately from the rest of the memory context.
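
A rough sketch of the new calling convention; the function, context name,
and int4 element type are assumptions for illustration, not code from the
patch:

    #include "postgres.h"
    #include "catalog/pg_type.h"
    #include "utils/array.h"

    /* Build a one-element int4 array directly in caller_context */
    static Datum
    make_singleton_array(MemoryContext caller_context, int32 val)
    {
        ArrayBuildState *astate;

        /* subcontext = false: cheap for many small groups, but the state
           can then only be freed along with caller_context as a whole */
        astate = initArrayResult(INT4OID, caller_context, false);
        astate = accumArrayResult(astate, Int32GetDatum(val), false,
                                  INT4OID, caller_context);
        return makeArrayResult(astate, caller_context);
    }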

Fixes bug report by Frank van Vugt on 2013-10-19.

Tomas Vondra. Reviewed by Ali Akbar, Tom Lane, and me.
2015-02-21 17:24:48 -08:00
Tom Lane f2874feb7c Some more FLEXIBLE_ARRAY_MEMBER fixes. 2015-02-21 01:46:43 -05:00
Tom Lane 9e3ad1aac5 Use fast path in plpgsql's RETURN/RETURN NEXT in more cases.
exec_stmt_return() and exec_stmt_return_next() have fast-path code for
handling a simple variable reference (i.e. "return var") without going
through the full expression evaluation machinery.  For some reason,
pl_gram.y was under the impression that this fast path only applied for
record/row variables; but in reality code for handling regular scalar
variables has been there all along.  Adjusting the logic to allow that
code to be used actually results in a net savings of code in pl_gram.y
(by eliminating some redundancy), and it buys a measurable though not
very impressive amount of speedup.

Noted while fooling with my expanded-array patch, wherein this makes a much
bigger difference because it enables returning an expanded array variable
without an extra flattening step.  But AFAICS this is a win regardless,
so commit it separately.
2015-02-16 15:28:48 -05:00
Tom Lane e983c4d1aa Rationalize the APIs of array element/slice access functions.
The four functions array_ref, array_set, array_get_slice, array_set_slice
have traditionally declared their array inputs and results as being of type
"ArrayType *".  This is a lie, and has been since Berkeley days, because
they actually also support "fixed-length array" types such as "name" and
"point"; not to mention that the inputs could be toasted.  These values
should be declared Datum instead to avoid confusion.  The current coding
already risks possible misoptimization by compilers, and it'll get worse
when "expanded" array representations become a valid alternative.

However, there's a fair amount of code using array_ref and array_set with
arrays that *are* known to be ArrayType structures, and there might be more
such places in third-party code.  Rather than cluttering those call sites
with PointerGetDatum/DatumGetArrayTypeP cruft, what I did was to rename the
existing functions to array_get_element/array_set_element, fix their
signatures, then reincarnate array_ref/array_set as backwards compatibility
wrappers.
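
Roughly, and hedging on the exact argument list, a Datum-based call now
looks like the sketch below; the wrapper function and its parameters are
assumed for illustration:

    #include "postgres.h"
    #include "utils/array.h"

    /* Fetch element [3] of a 1-D array value passed around as a Datum */
    static Datum
    fetch_third(Datum arraydatum, int elmlen, bool elmbyval, char elmalign,
                bool *isnull)
    {
        int         indx[1] = {3};

        return array_get_element(arraydatum, 1, indx,
                                 -1,    /* arraytyplen: -1 for varlena arrays */
                                 elmlen, elmbyval, elmalign,
                                 isnull);
    }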

array_get_slice/array_set_slice have no such constituency in the core code,
and probably not in third-party code either, so I just changed their APIs.
2015-02-16 12:23:58 -05:00
Peter Eisentraut f8948616c9 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 19c72ea8d856d7b1d4f5d759a766c8206bf9ce53
2015-02-01 23:23:40 -05:00
Tom Lane 599d00aa68 Fix volatile-safety issue in pltcl_SPI_execute_plan().
The "callargs" variable is modified within PG_TRY and then referenced
within PG_CATCH, which is exactly the coding pattern we've now found
to be unsafe.  Marking "callargs" volatile would be problematic because
it is passed by reference to some Tcl functions, so fix the problem
by not modifying it within PG_TRY.  We can just postpone the free()
till we exit the PG_TRY construct, as is already done elsewhere in this
same file.
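
The general shape of the safe pattern, with invented helper names standing
in for the actual pltcl code:

    #include "postgres.h"

    extern char **build_call_args(int nargs);   /* invented helpers */
    extern void invoke_tcl(char **args);

    static void
    call_with_cleanup(int nargs)
    {
        char      **callargs = build_call_args(nargs);

        PG_TRY();
        {
            /* use callargs, but do not reassign it in here: without a
               "volatile" marking, PG_CATCH could see a stale register copy */
            invoke_tcl(callargs);
        }
        PG_CATCH();
        {
            free(callargs);
            PG_RE_THROW();
        }
        PG_END_TRY();

        free(callargs);         /* normal-path free, after leaving PG_TRY */
    }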

Also, fix failure to free(callargs) when exiting on too-many-arguments
error.  This is only a minor memory leak, but a leak nonetheless.

In passing, remove some unnecessary "volatile" markings in the same
function.  Those doubtless are there because gcc 2.95.3 whinged about
them, but we now know that its algorithm for complaining is many bricks
shy of a load.

This is certainly a live bug with compilers that optimize similarly
to current gcc, so back-patch to all active branches.
2015-01-26 12:18:25 -05:00
Tom Lane 7391e2513f Fix some functions that were declared static then defined not-static.
Per testing with a compiler that whines about this.
2015-01-12 16:08:43 -05:00
Alvaro Herrera 91539c5698 Fix thinko in plpython error message 2015-01-06 15:16:29 -03:00
Bruce Momjian 4baaf863ec Update copyright for 2015
Backpatch certain files through 9.0
2015-01-06 11:43:47 -05:00
Tom Lane 4a14f13a0a Improve hash_create's API for selecting simple-binary-key hash functions.
Previously, if you wanted anything besides C-string hash keys, you had to
specify a custom hashing function to hash_create().  Nearly all such
callers were specifying tag_hash or oid_hash; which is tedious, and rather
error-prone, since a caller could easily miss the opportunity to optimize
by using hash_uint32 when appropriate.  Replace this with a design whereby
callers using simple binary-data keys just specify HASH_BLOBS and don't
need to mess with specific support functions.  hash_create() itself will
take care of optimizing when the key size is four bytes.
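
For illustration only (not part of the patch), a caller with a simple
4-byte binary key might now look about like this; the entry struct and
table name are invented:

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct MyCacheEntry
    {
        Oid         key;        /* hash key: plain binary data */
        int         value;
    } MyCacheEntry;

    static HTAB *
    create_my_cache(void)
    {
        HASHCTL     ctl;

        memset(&ctl, 0, sizeof(ctl));
        ctl.keysize = sizeof(Oid);
        ctl.entrysize = sizeof(MyCacheEntry);
        /* HASH_BLOBS says "binary key": no hash function to fill in */
        return hash_create("my oid cache", 128, &ctl, HASH_ELEM | HASH_BLOBS);
    }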

This nets out saving a few hundred bytes of code space, and offers
a measurable performance improvement in tidbitmap.c (which was not
exploiting the opportunity to use hash_uint32 for its 4-byte keys).
There might be some wins elsewhere too, I didn't analyze closely.

In future we could look into offering a similar optimized hashing function
for 8-byte keys.  Under this design that could be done in a centralized
and machine-independent fashion, whereas getting it right for keys of
platform-dependent sizes would've been notationally painful before.

For the moment, the old way still works fine, so as not to break source
code compatibility for loadable modules.  Eventually we might want to
remove tag_hash and friends from the exported API altogether, since there's
no real need for them to be explicitly referenced from outside dynahash.c.

Teodor Sigaev and Tom Lane
2014-12-18 13:36:36 -05:00
Peter Eisentraut ee3bec5e22 Translation updates 2014-12-15 00:25:35 -05:00
Tom Lane 475aedd1ef Improve error messages for malformed array input strings.
Make the error messages issued by array_in() uniformly follow the style
	ERROR: malformed array literal: "actual input string"
	DETAIL: specific complaint here
and rewrite many of the specific complaints to be clearer.

The immediate motivation for doing this is a complaint from Josh Berkus
that json_to_record() produced an unintelligible error message when
dealing with an array item, because it tries to feed the JSON-format
array value to array_in().  Really it ought to be smart enough to
perform JSON-to-Postgres array conversion, but that's a future feature
not a bug fix.  In the meantime, this change is something we agreed
we could back-patch into 9.4, and it should help de-confuse things a bit.
2014-12-02 18:23:27 -05:00
Tom Lane f4e031c662 Add bms_next_member(), and use it where appropriate.
This patch adds a way of iterating through the members of a bitmapset
nondestructively, unlike the old way with bms_first_member().  While
bms_next_member() is very slightly slower than bms_first_member()
(at least for typical-size bitmapsets), eliminating the need to palloc
and pfree a temporary copy of the target bitmapset is a significant win.
So this method should be preferred in all cases where a temporary copy
would be necessary.
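
A hedged sketch of the intended idiom (the summing function is invented,
only to show the loop shape):

    #include "postgres.h"
    #include "nodes/bitmapset.h"

    /* Sum the members of bms without modifying or copying it */
    static int
    sum_members(const Bitmapset *bms)
    {
        int         total = 0;
        int         x = -1;     /* start "before" the first member */

        while ((x = bms_next_member(bms, x)) >= 0)
            total += x;         /* bms stays intact, unlike bms_first_member */

        return total;
    }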

Tom Lane, with suggestions from Dean Rasheed and David Rowley
2014-11-28 13:37:25 -05:00
Tom Lane bb1b8f694a De-reserve most statement-introducing keywords in plpgsql.
Add a bit of context sensitivity to plpgsql_yylex() so that it can
recognize when the word it is looking at is the first word of a new
statement, and if so whether it is the target of an assignment statement.
When we are at start of statement and it's not an assignment, we can
prefer recognizing unreserved keywords over recognizing variable names,
thereby allowing most statements' initial keywords to be demoted from
reserved to unreserved status.  This is rather useful already (there are
15 such words that get demoted here), and what's more to the point is
that future patches proposing to add new plpgsql statements can avoid
objections about having to add new reserved words.

The keywords BEGIN, DECLARE, FOR, FOREACH, LOOP, WHILE need to remain
reserved because they can be preceded by block labels, and the logic
added here doesn't understand about block labels.  In principle we
could probably fix that, but it would take more than one token of
lookback and the benefit doesn't seem worth extra complexity.

Also note I didn't de-reserve EXECUTE, because it is used in more places
than just statement start.  It's possible it could be de-reserved with
more work, but that would be an independent fix.

In passing, also de-reserve COLLATE and DEFAULT, which shouldn't have
been reserved in the first place since they only need to be recognized
within DECLARE sections.
2014-11-25 15:02:09 -05:00
Tom Lane bac27394a1 Support arrays as input to array_agg() and ARRAY(SELECT ...).
These cases formerly failed with errors about "could not find array type
for data type".  Now they yield arrays of the same element type and one
higher dimension.

The implementation involves creating functions with API similar to the
existing accumArrayResult() family.  I (tgl) also extended the base family
by adding an initArrayResult() function, which allows callers to avoid
special-casing the zero-inputs case if they just want an empty array as
result.  (Not all do, so the previous calling convention remains valid.)
This allowed simplifying some existing code in xml.c and plperl.c.

Ali Akbar, reviewed by Pavel Stehule, significantly modified by me
2014-11-25 12:21:28 -05:00
Tom Lane e2dc3f5772 Get rid of redundant production in plpgsql grammar.
There may once have been a reason for the intermediate proc_stmts
production in the plpgsql grammar, but it isn't doing anything useful
anymore, so let's collapse it into proc_sect.  Saves some code and
probably a small number of nanoseconds per statement list.

In passing, correctly alphabetize keyword lists to match pl_scanner.c;
note that for "rowtype" vs "row_count", pl_scanner.c must sort on the
basis of the lower-case spelling.

Noted while fooling with a patch to de-reserve more plpgsql keywords.
2014-11-23 15:31:36 -05:00
Peter Eisentraut 7466a1b75f Translation updates 2014-11-16 21:32:51 -05:00
Noah Misch 00c07e497f Re-remove the pythonxx.def file's dependency on the DLL.
The reasons behind commit 0d147e43ad still
stand, so this reverts the non-cosmetic portion of commit
a7983e989d.  Back-patch to 9.4, where the
latter commit first appeared.
2014-11-02 21:43:30 -05:00