Commit Graph

18 Commits

Author SHA1 Message Date
Peter Eisentraut f73999262e tablefunc: Reject negative number of tuples passed to normal_rand()
The function converted the first argument, i.e. the number of tuples to
return, into an unsigned integer, which turns out to be a huge number
when a negative value is passed.  This causes the function to take
much longer to execute.  Instead, reject a negative value.

(If someone really wants to generate many more result rows, they
should consider adding a bigint or numeric variant.)

While at it, improve SQL test to test the number of tuples returned by
this function.
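
For example, a sketch of the behavior change (exact error wording may
differ):

  SELECT count(*) FROM normal_rand(1000, 5.0, 3.0);  -- 1000 rows, as requested
  SELECT * FROM normal_rand(-1, 5.0, 3.0);           -- now rejected with an error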

Author: Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/CAG-ACPW3PUUmSnM6cLa9Rw4BEC5cEMKjX8Gogc8gvQcT3cYA1A@mail.gmail.com
2020-11-25 15:30:18 +01:00
Tom Lane 37507962c3 Handle unexpected query results, especially NULLs, safely in connectby().
connectby() didn't adequately check that the constructed SQL query returns
what it's expected to; in fact, since commit 08c33c426b it wasn't
checking that at all.  This could result in a null-pointer-dereference
crash if the constructed query returns only one column instead of the
expected two.  Less excitingly, it could also result in surprising data
conversion failures if the constructed query returned values that were
not I/O-conversion-compatible with the types specified by the query
calling connectby().

In all branches, insist that the query return at least two columns;
this seems like a minimal sanity check that can't break any reasonable
use-cases.

In HEAD, insist that the constructed query return the types specified by
the outer query, including checking for typmod incompatibility, which the
code never did even before it got broken.  This is to hide the fact that
the implementation does a conversion to text and back; someday we might
want to improve that.

In back branches, leave that alone, since adding a type check in a minor
release is more likely to break things than make people happy.  Type
inconsistencies will continue to work so long as the actual type and
declared type are I/O representation compatible, and otherwise will fail
the same way they used to.

Also, in all branches, be on guard for NULL results from the constructed
query, which formerly would cause null-pointer dereference crashes.
We now print the row with the NULL but don't recurse down from it.

In passing, get rid of the rather pointless idea that
build_tuplestore_recursively() should return the same tuplestore that's
passed to it.
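
For reference, a typical call looks like this (hypothetical tree table);
the AS column list is what the constructed query's output is now checked
against:

  SELECT * FROM connectby('tree', 'id', 'parent_id', '1', 0)
    AS t(id int, parent_id int, level int);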

Michael Paquier, adjusted somewhat by me
2015-01-29 20:18:33 -05:00
Tom Lane 629b3af27d Convert contrib modules to use the extension facility.
This isn't fully tested as yet, in particular I'm not sure that the
"foo--unpackaged--1.0.sql" scripts are OK.  But it's time to get some
buildfarm cycles on it.

sepgsql is not converted to an extension, mainly because it seems to
require a very nonstandard installation process.
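
Once converted, installing a module reduces to CREATE EXTENSION; the
"foo--unpackaged--1.0.sql" scripts serve the FROM unpackaged upgrade
path for pre-existing loose installs:

  CREATE EXTENSION tablefunc;
  -- or, for an existing non-extension install:
  CREATE EXTENSION tablefunc FROM unpackaged;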

Dimitri Fontaine and Tom Lane
2011-02-13 22:54:49 -05:00
Peter Eisentraut 3f11971916 Remove extra newlines at end and beginning of files, add missing newlines
at end of files.
2010-08-19 05:57:36 +00:00
Tom Lane 30e2c42e00 Fix a few contrib regression test scripts that hadn't gotten the word
about best practice for including the module creation scripts: to wit
that you should suppress NOTICE messages.  This avoids creating
regression failures by adding or removing comment lines in the module
scripts.
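
Concretely, that means loading the module along these lines (a sketch):

  SET client_min_messages = warning;
  \set ECHO none
  \i tablefunc.sql
  \set ECHO all
  RESET client_min_messages;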
2007-11-13 06:29:04 +00:00
Joe Conway 01496439e9 Have crosstab variants treat NULL rowid as a category in its own right,
per suggestion from Tom Lane. This fixes a crash bug reported by Stefan
Schwarzer.
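
Sketch of the affected case (using the cth example further down): a
source row with a NULL rowid now produces its own result row, e.g.

  INSERT INTO cth VALUES (8, NULL, NULL, 'volts', '1.9');
  -- the hashed crosstab call shown below now returns an extra row with
  -- a NULL rowid instead of crashing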
2007-11-10 05:00:41 +00:00
Peter Eisentraut 7f4f42fa10 Clean up CREATE FUNCTION syntax usage in contrib and elsewhere, in
particular get rid of single quotes around language names and old WITH ()
construct.
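
That is, roughly (hypothetical function):

  -- old style
  CREATE FUNCTION foo(int) RETURNS int
    AS 'MODULE_PATHNAME', 'foo' LANGUAGE 'C' WITH (isStrict);
  -- new style
  CREATE FUNCTION foo(int) RETURNS int
    AS 'MODULE_PATHNAME', 'foo' LANGUAGE C STRICT;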
2006-02-27 16:09:50 +00:00
Tom Lane 978129f28e Document get_call_result_type() and friends; mark TypeGetTupleDesc()
and RelationNameGetTupleDesc() as deprecated; remove uses of the
latter in the contrib library.  Along the way, clean up crosstab()
code and documentation a little.
2005-05-30 23:09:07 +00:00
Joe Conway bc8a1fc282 Hashed crosstab was dying with an SPI_finish error when the source SQL
produced no rows. Now it returns 0 rows instead. Adjusted regression
test for this case.
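
Sketch of the fixed case (again using the cth example further down):

  SELECT * FROM crosstab(
    'SELECT rowid, attribute, val FROM cth WHERE false',
    'SELECT DISTINCT attribute FROM cth ORDER BY 1')
  AS c(rowid text, temperature int4, test_result text,
       test_startdate timestamp, volts float8);
  -- now: (0 rows), instead of an SPI_finish error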
2004-08-11 00:49:35 +00:00
Tom Lane c472b8366f With Joe Conway's concurrence, remove srandom() call from normal_rand().
This was the last piece of code that took it upon itself to reset the
random number sequence --- now we only have srandom() in postmaster start,
backend start, and explicit setseed() operations.
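
Explicit reseeding stays available at the SQL level (a sketch; setseed()
reseeds the sequence that normal_rand() draws from):

  SELECT setseed(0.5);
  SELECT * FROM normal_rand(3, 0.0, 1.0);  -- reproducible after the same seed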
2003-09-13 21:44:50 +00:00
Bruce Momjian a265b7f70a > On Sun, 2003-06-22 at 02:09, Joe Conway wrote:
>>Sounds like all that's needed for your case. But to be complete, in
>>addition to changing tablefunc.c we'd have to:
>>1) come up with a new function call signature that makes sense and does
>>not cause backward compatibility problems for other people
>>2) make needed changes to tablefunc.sql.in
>>3) adjust the README.tablefunc appropriately
>>4) adjust the regression test for new functionality
>>5) be sure we don't break any of the old cases
>>
>>If you want to submit a complete patch, it would be gratefully accepted
>>-- for review at least ;-)
>
> Here's the patch, at least for steps 1-3

Nabil Sayegh
Joe Conway
2003-07-27 03:51:59 +00:00
Bruce Momjian 64d0b8b05f Attached is an update to contrib/tablefunc. It implements a new hashed
version of crosstab. This fixes a major deficiency in real-world use of
the original version. Easiest to understand with an illustration:

Data:
-------------------------------------------------------------------
select * from cth;
 id | rowid |        rowdt        |   attribute    |      val
----+-------+---------------------+----------------+---------------
  1 | test1 | 2003-03-01 00:00:00 | temperature    | 42
  2 | test1 | 2003-03-01 00:00:00 | test_result    | PASS
  3 | test1 | 2003-03-01 00:00:00 | volts          | 2.6987
  4 | test2 | 2003-03-02 00:00:00 | temperature    | 53
  5 | test2 | 2003-03-02 00:00:00 | test_result    | FAIL
  6 | test2 | 2003-03-02 00:00:00 | test_startdate | 01 March 2003
  7 | test2 | 2003-03-02 00:00:00 | volts          | 3.1234
(7 rows)

Original crosstab:
-------------------------------------------------------------------
SELECT * FROM crosstab(
  'SELECT rowid, attribute, val FROM cth ORDER BY 1,2', 4)
AS c(rowid text, temperature text, test_result text,
     test_startdate text, volts text);
 rowid | temperature | test_result | test_startdate | volts
-------+-------------+-------------+----------------+--------
 test1 | 42          | PASS        | 2.6987         |
 test2 | 53          | FAIL        | 01 March 2003  | 3.1234
(2 rows)

Hashed crosstab:
-------------------------------------------------------------------
SELECT * FROM crosstab(
  'SELECT rowid, attribute, val FROM cth ORDER BY 1',
  'SELECT DISTINCT attribute FROM cth ORDER BY 1')
AS c(rowid text, temperature int4, test_result text,
     test_startdate timestamp, volts float8);
 rowid | temperature | test_result |   test_startdate    | volts
-------+-------------+-------------+---------------------+--------
 test1 |          42 | PASS        |                     | 2.6987
 test2 |          53 | FAIL        | 2003-03-01 00:00:00 | 3.1234
(2 rows)

Notice that the original crosstab slides data over to the left in the
result tuple when it encounters missing data. In order to work around
this you have to make your source SQL do all sorts of contortions
(Cartesian join of distinct rowid with distinct attribute; left join
that back to the real source data). The new version avoids this by
building a hash table using a second distinct attribute query.
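
The contortion referred to above looks roughly like this (a sketch):

  SELECT r.rowid, a.attribute, c.val
  FROM (SELECT DISTINCT rowid FROM cth) r
  CROSS JOIN (SELECT DISTINCT attribute FROM cth) a
  LEFT JOIN cth c USING (rowid, attribute)
  ORDER BY 1, 2;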

The new version also allows for "extra" columns (see the README) and
allows the result columns to be coerced into differing datatypes if they
are suitable (as shown above).

In testing a "real-world" data set (69 distinct rowids, 27 distinct
categories/attributes, multiple missing data points) I saw about a
5-fold improvement in execution time (from about 2200 ms with the old
version to 440 ms with the new).

I left the original version intact because: 1) backward compatibility,
and 2) it is probably slightly faster if you know you have no missing
attributes.

README and regression test adjustments included. If there are no
objections, please apply.

Joe Conway
2003-03-20 06:46:30 +00:00
Tom Lane 1f1c332381 Remove inappropriate double-quoting in connectby() code; adjust
regression test to avoid using VALUE as a name.  From Joe Conway.
2002-11-23 01:54:09 +00:00
Bruce Momjian e5cf1a8a26 SET autocommit no longer needed in /contrib because pg_regress.sh does
it automatically now on regression session startup.
2002-10-21 01:42:14 +00:00
Bruce Momjian aa4c702eac Update /contrib for "autocommit TO 'on'".
Create objects in public schema.

Make spacing/capitalization consistent.

Remove transaction block use for object creation.

Remove unneeded function GRANTs.
2002-10-18 18:41:22 +00:00
Bruce Momjian a62873d279 The attached adds a bit to the contrib/tablefunc regression test for
behavior of connectby() in the presence of infinite recursion. Please
apply this one in addition to the one sent earlier.
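
The tested scenario is along these lines (a sketch; exact table contents
and message wording per the regression test):

  CREATE TABLE connectby_tree (keyid text, parent_keyid text);
  INSERT INTO connectby_tree VALUES ('A', 'B');
  INSERT INTO connectby_tree VALUES ('B', 'A');  -- cycle

  SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'A', 0)
    AS t(keyid text, parent_keyid text, level int);
  -- ERROR:  infinite recursion detected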

Joe Conway
2002-10-03 17:15:36 +00:00
Tom Lane bd04184b11 Attached is a patch to fix some recently raised issues in
contrib/tablefunc. Specifically, it replaces the use of VIEWs (for needed
composite type creation) with CREATE TYPE. It also performs GRANT
EXECUTE ON FUNCTION foo() TO PUBLIC for all of the created functions. There
was also a cosmetic change to two regression files.
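
The new pattern is along these lines (a sketch; the actual type and
function names come from tablefunc.sql.in):

  CREATE TYPE tablefunc_crosstab_2 AS
    (row_name text, category_1 text, category_2 text);
  GRANT EXECUTE ON FUNCTION crosstab(text) TO PUBLIC;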

Joe Conway
2002-09-14 19:53:59 +00:00
Bruce Momjian 6fff9a7475 The attached removes the current non-standard file
"contrib/tablefunc/tablefunc-test.sql", and adds a standard regression
test suite to contrib/tablefunc.

Joe Conway
2002-09-12 00:14:40 +00:00