Commit Graph

259 Commits

Author SHA1 Message Date
Joe Conway c444868389 Protect dblink from invalid options when using postgres_fdw server
When dblink uses a postgres_fdw server name for its connection, it
is possible for the connection to have options that are invalid
with dblink (e.g. "updatable"). The recommended way to avoid this
problem is to use dblink_fdw servers instead. However there are use
cases for using postgres_fdw, and possibly other FDWs, for dblink
connection options, therefore protect against trying to use any
options that do not apply by using is_valid_dblink_option() when
building the connection string from the options.

Back-patch to 9.3. Although 9.2 supports FDWs for connection info,
is_valid_dblink_option() did not yet exist, and neither did
postgres_fdw, at least in the postgres source tree. Given the lack
of previous complaints, fixing that seems too invasive/not worth it.

Author: Corey Huinker
Reviewed-By: Joe Conway
Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
2016-12-22 09:20:35 -08:00
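A minimal sketch of the kind of guard described above, using a simplified helper; dblink's actual check, is_valid_dblink_option(), is stricter and also considers the option's FDW context:

```c
#include "postgres.h"
#include "lib/stringinfo.h"
#include "libpq-fe.h"

/*
 * Hypothetical helper: copy an FDW/server option into the conninfo string
 * only if libpq itself recognizes it; options such as postgres_fdw's
 * "updatable" are silently skipped instead of breaking the connection.
 * (libpq_opts would come from PQconndefaults(); value escaping omitted.)
 */
static void
append_if_libpq_option(StringInfo connstr, PQconninfoOption *libpq_opts,
                       const char *name, const char *value)
{
    PQconninfoOption *o;

    for (o = libpq_opts; o->keyword != NULL; o++)
    {
        if (strcmp(o->keyword, name) == 0)
        {
            appendStringInfo(connstr, "%s='%s' ", name, value);
            return;
        }
    }
    /* not a libpq connection option: drop it */
}
```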
Joe Conway ea0aa9698c Improve dblink error message when remote does not provide it
When dblink or postgres_fdw detects an error on the remote side of the
connection, it will try to construct a local error message as best it
can using libpq's PQresultErrorField(). When no primary message is
available, it was bailing out with an unhelpful "unknown error". Make
that message better and more style guide compliant. Per discussion
on hackers.

Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3.

Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
2016-12-21 15:51:31 -08:00
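Roughly what the fallback looks like with libpq's error-field API — a sketch, not dblink's exact code, and the message text is illustrative:

```c
#include "postgres.h"
#include "libpq-fe.h"

/* Prefer the remote primary message; if the server sent none, report
 * something more descriptive than "unknown error". */
static void
report_remote_error(PGresult *res)
{
    const char *primary = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);

    if (primary == NULL || primary[0] == '\0')
        ereport(ERROR,
                (errcode(ERRCODE_CONNECTION_FAILURE),
                 errmsg("could not obtain message string for remote error")));

    ereport(ERROR,
            (errcode(ERRCODE_CONNECTION_FAILURE),
             errmsg("%s", primary)));
}
```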
Peter Eisentraut 0665023b44 Remove unnecessary prototypes
Prototypes for functions implementing V1-callable functions are no
longer necessary.

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
2016-09-30 14:04:16 -04:00
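That is because PG_FUNCTION_INFO_V1() itself emits the extern declaration; a small illustration with an invented function name:

```c
#include "postgres.h"
#include "fmgr.h"

/* No separate "extern Datum demo_add_one(PG_FUNCTION_ARGS);" line is
 * needed anymore: the macro below already declares it. */
PG_FUNCTION_INFO_V1(demo_add_one);

Datum
demo_add_one(PG_FUNCTION_ARGS)
{
    int32   arg = PG_GETARG_INT32(0);

    PG_RETURN_INT32(arg + 1);
}
```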
Tom Lane ea268cdc9a Add macros to make AllocSetContextCreate() calls simpler and safer.
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
had typos in the context-sizing parameters.  While none of these led to
especially significant problems, they did create minor inefficiencies,
and it's now clear that expecting people to copy-and-paste those calls
accurately is not a great idea.  Let's reduce the risk of future errors
by introducing single macros that encapsulate the common use-cases.
Three such macros are enough to cover all but two special-purpose contexts;
those two calls can be left as-is, I think.

While this patch doesn't in itself improve matters for third-party
extensions, it doesn't break anything for them either, and they can
gradually adopt the simplified notation over time.

In passing, change TopMemoryContext to use the default allocation
parameters.  Formerly it could only be extended 8K at a time.  That was
probably reasonable when this code was written; but nowadays we create
many more contexts than we did then, so that it's not unusual to have a
couple hundred K in TopMemoryContext, even without considering various
dubious code that sticks other things there.  There seems no good reason
not to let it use growing blocks like most other contexts.

Back-patch to 9.6, mostly because that's still close enough to HEAD that
it's easy to do so, and keeping the branches in sync can be expected to
avoid some future back-patching pain.  The bugs fixed by these changes
don't seem to be significant enough to justify fixing them further back.

Discussion: <21072.1472321324@sss.pgh.pa.us>
2016-08-27 17:50:38 -04:00
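The macros replace the spelled-out triple of size parameters; a hedged before/after sketch (the context name is illustrative):

```c
#include "postgres.h"
#include "utils/memutils.h"

static MemoryContext
make_work_context(void)
{
    /* Before this commit, callers spelled out three sizes by hand:
     *     AllocSetContextCreate(CurrentMemoryContext, "work cxt",
     *                           ALLOCSET_DEFAULT_MINSIZE,
     *                           ALLOCSET_DEFAULT_INITSIZE,
     *                           ALLOCSET_DEFAULT_MAXSIZE);
     * The new macro expands to the same defaults and is hard to mistype. */
    return AllocSetContextCreate(CurrentMemoryContext,
                                 "work cxt",
                                 ALLOCSET_DEFAULT_SIZES);
}
```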
Tom Lane 18555b1323 Establish conventions about global object names used in regression tests.
To ensure that "make installcheck" can be used safely against an existing
installation, we need to be careful about what global object names
(database, role, and tablespace names) we use; otherwise we might
accidentally clobber important objects.  There's been a weak consensus that
test databases should have names including "regression", and that test role
names should start with "regress_", but we didn't have any particular rule
about tablespace names; and neither of the other rules was followed with
any consistency either.

This commit moves us a long way towards having a hard-and-fast rule that
regression test databases must have names including "regression", and that
test role and tablespace names must start with "regress_".  It's not
completely there because I did not touch some test cases in rolenames.sql
that test creation of special role names like "session_user".  That will
require some rethinking of exactly what we want to test, whereas the intent
of this patch is just to hit all the cases in which the needed renamings
are cosmetic.

There is no enforcement mechanism in this patch either, but if we don't
add one we can expect that the tests will soon be violating the convention
again.  Again, that's not such a cosmetic change and it will require
discussion.  (But I did use a quick-hack enforcement patch to find these
cases.)

Discussion: <16638.1468620817@sss.pgh.pa.us>
2016-07-17 18:42:43 -04:00
Tom Lane 7e81a18d49 Fix parallel-safety markings for contrib/dblink.
As shown by buildfarm reports, dblink_build_sql_insert and
dblink_build_sql_update are *not* parallel safe, because they
may attempt to access temporary tables of the local session.

Although dblink_build_sql_delete doesn't actually touch the
contents of the referenced table, it seems consistent and prudent
to mark it PARALLEL RESTRICTED too.
2016-06-17 23:08:21 -04:00
Robert Haas 20eb2731b7 Update dblink extension for parallel query.
Almost all functions provided by this extension are PARALLEL
RESTRICTED.  Mostly, that's because the leader's TCP connections won't
be shared with the workers, but in some cases like dblink_get_pkey
it's because they obtain locks which might be released early if taken
within a parallel worker.  dblink_fdw_validator probably can't be used
in a query anyway, but there would be no problem from the point of
view of parallel query if it were, so it's PARALLEL SAFE.

Andreas Karlsson
2016-06-17 15:18:44 -04:00
Teodor Sigaev 8b99edefca Revert CREATE INDEX ... INCLUDING ...
It's not ready yet, so revert the two commits:
690c543550 - unstable test output
386e3d7609 - patch itself
2016-04-08 21:52:13 +03:00
Teodor Sigaev 386e3d7609 CREATE INDEX ... INCLUDING (column[, ...])
Now indexes (but only B-tree for now) can contain "extra" columns that
do not participate in the index structure; they are just stored in leaf
tuples. This allows an index-only scan using a single index instead of
two or more indexes.

Author: Anastasia Lubennikova with minor editorializing by me
Reviewers: David Rowley, Peter Geoghegan, Jeff Janes
2016-04-08 19:45:59 +03:00
Bruce Momjian ee94300446 Update copyright for 2016
Backpatch certain files through 9.1
2016-01-02 13:33:40 -05:00
Tom Lane 0426f349ef Rearrange the handling of error context reports.
Remove the code in plpgsql that suppressed the innermost line of CONTEXT
for messages emitted by RAISE commands.  That was never more than a quick
backwards-compatibility hack, and it's pretty silly in cases where the
RAISE is nested in several levels of function.  What's more, it violated
our design theory that verbosity of error reports should be controlled
on the client side not the server side.

To alleviate the resulting noise increase, introduce a feature in libpq
and psql whereby the CONTEXT field of messages can be suppressed, either
always or only for non-error messages.  Printing CONTEXT for errors only
is now their default behavior.

The actual code changes here are pretty small, but the effects on the
regression test outputs are widespread.  I had to edit some of the
alternative expected outputs by hand; hopefully the buildfarm will soon
find anything I fat-fingered.

In passing, fix up (again) the output line counts in psql's various
help displays.  Add some commentary about how to verify them.

Pavel Stehule, reviewed by Petr Jelínek, Jeevan Chalke, and others
2015-09-05 11:58:33 -04:00
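On the libpq side this surfaces as a context-visibility setting; a brief sketch of how a client might select the new errors-only default, assuming an already-connected PGconn:

```c
#include "libpq-fe.h"

/* Show CONTEXT fields only for errors; the other modes are
 * PQSHOW_CONTEXT_NEVER and PQSHOW_CONTEXT_ALWAYS. */
static void
show_context_for_errors_only(PGconn *conn)
{
    PQsetErrorContextVisibility(conn, PQSHOW_CONTEXT_ERRORS);
}
```

psql exposes the same choice through its SHOW_CONTEXT variable.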
Tom Lane dabda64152 Fix volatile-safety issue in dblink's materializeQueryResult().
Some fields of the sinfo struct are modified within PG_TRY and then
referenced within PG_CATCH, so as with recent patch to async.c, "volatile"
is necessary for strict POSIX compliance; and that propagates to a couple
of subroutines as well as materializeQueryResult() itself.  I think the
risk of actual issues here is probably higher than in async.c, because
storeQueryResult() is likely to get inlined into materializeQueryResult(),
leaving the compiler free to conclude that its stores into sinfo fields are
dead code.
2015-01-26 15:17:33 -05:00
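The hazard being fixed: a variable assigned inside PG_TRY and read inside PG_CATCH must be volatile, or the compiler may treat the stores as dead once setjmp/longjmp enter the picture. A minimal illustration, not the dblink code itself:

```c
#include "postgres.h"

extern void do_risky_work(void);    /* stand-in for the real work */

static void
try_catch_demo(void)
{
    volatile bool stored = false;   /* volatile because of the pattern below */

    PG_TRY();
    {
        do_risky_work();            /* may ereport(ERROR, ...) */
        stored = true;              /* modified inside PG_TRY ... */
    }
    PG_CATCH();
    {
        if (stored)                 /* ... referenced inside PG_CATCH */
            elog(LOG, "result had already been stored");
        PG_RE_THROW();
    }
    PG_END_TRY();
}
```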
Bruce Momjian 4baaf863ec Update copyright for 2015
Backpatch certain files through 9.0
2015-01-06 11:43:47 -05:00
Peter Eisentraut 1e95bbc870 Fix SHLIB_PREREQS use in contrib, allowing PGXS builds
dblink and postgres_fdw use SHLIB_PREREQS = submake-libpq to build libpq
first.  This doesn't work in a PGXS build, because there is no libpq to
build.  So just omit setting SHLIB_PREREQS in this case.

Note that PGXS users can still use SHLIB_PREREQS (although it is not
documented).  The problem here is only that contrib modules can be built
in-tree or using PGXS, and the prerequisite is only applicable in the
former case.

Commit 6697aa2bc2 previously attempted to
address this by creating a somewhat fake submake-libpq target in
Makefile.global.  That was not the right fix, and it was also done in a
nonportable way, so revert that.
2014-12-04 07:58:12 -05:00
Robert Haas f5d9698a84 Add infrastructure to save and restore GUC values.
This is further infrastructure for parallelism.

Amit Khandekar, Noah Misch, Robert Haas
2014-11-24 16:37:56 -05:00
Andres Freund 57ca1d4f01 Specify the port in dblink and postgres_fdw tests.
That allows those tests to be run against a postmaster listening on a
nonstandard port without having to export PGPORT in the postmaster's
environment.

This still doesn't support connecting to a nondefault host without
configuring it in postmaster's environment. That's harder and less
frequently used though. So this is a useful step.
2014-08-26 12:28:08 +02:00
Andres Freund ddc2504dbc Don't hardcode contrib_regression dbname in postgres_fdw and dblink tests.
That allows parallel installcheck to succeed inside contrib/. The
output is not particularly pretty unless make's -O option to
synchronize the output is used.

There are other tests, outside contrib, that use a hardcoded,
non-unique database name. Those prevent parallel installcheck from
being used across more directories; but that's something for a separate
patch.
2014-08-26 12:27:26 +02:00
Andres Freund d153b80161 Fix typos in some error messages thrown by extension scripts when fed to psql.
Some of the many error messages introduced in 458857cc missed 'FROM
unpackaged'. Also e016b724 and 45ffeb7e forgot to quote extension
version numbers.

Backpatch to 9.1, just like 458857cc which introduced the messages. Do
so because the error messages thrown when the wrong command is copy &
pasted aren't easy to understand.
2014-08-25 18:30:37 +02:00
Noah Misch d7cdf6ee36 Diagnose incompatible OpenLDAP versions during build and test.
With OpenLDAP versions 2.4.24 through 2.4.31, inclusive, PostgreSQL
backends can crash at exit.  Raise a warning during "configure" based on
the compile-time OpenLDAP version number, and test the crash scenario in
the dblink test suite.  Back-patch to 9.0 (all supported versions).
2014-07-22 11:01:03 -04:00
Noah Misch 0ffc201a51 Add file version information to most installed Windows binaries.
Prominent binaries already had this metadata.  A handful of minor
binaries, such as pg_regress.exe, still lack it; efforts to eliminate
such exceptions are welcome.

Michael Paquier, reviewed by MauMau.
2014-07-14 14:07:52 -04:00
Joe Conway 1dde5782e3 Clean up data conversion short-lived memory context.
dblink uses a short-lived data conversion memory context. However it
was not deleted when no longer needed, leading to a noticeable memory
leak under some circumstances. Plug the hole, along with minor
refactoring. Backpatch to 9.2 where the leak was introduced.

Report and initial patch by MauMau. Reviewed/modified slightly by
Tom Lane and me.
2014-06-20 12:24:59 -07:00
Noah Misch c82725edfa Let installcheck-world pass against a server requiring a password.
Give passwords to each user created in support of an ECPG connection
test case.  Use SET SESSION AUTHORIZATION, not a fresh connection, to
reduce privileges during a dblink test case.

To test against such a server, both the "make installcheck-world"
environment and the postmaster environment must provide the default
user's password; $PGPASSFILE is the principal way to do so.  (The
postmaster environment needs it for dblink and postgres_fdw tests.)
2014-06-19 21:41:26 -04:00
Bruce Momjian 0a78320057 pgindent run for 9.4
This includes removing tabs after periods in C comments, which was
applied to back branches, so this change should not affect backpatching.
2014-05-06 12:12:18 -04:00
Bruce Momjian 7e04792a1c Update copyright for 2014
Update all files in head, and files COPYRIGHT and legal.sgml in all back
branches.
2014-01-07 16:05:30 -05:00
Peter Eisentraut edc43458d7 Add more use of psprintf() 2014-01-06 21:30:26 -05:00
Joe Conway d6ca510d9d Fix performance regression in dblink connection speed.
Previous commit e5de601267 modified dblink
to ensure the client encoding matched the server's. However, the added
PQsetClientEncoding() call introduced significant overhead. Restore original
performance in the common case where client encoding already matches
server encoding by doing nothing in that case. Applies to all active
branches.

Issue reported and work sponsored by Zonar Systems.
2013-12-07 17:00:26 -08:00
Robert Haas cacbdd7810 Use appendStringInfoString instead of appendStringInfo where possible.
This shaves a few cycles, and generally seems like good programming
practice.

David Rowley
2013-10-31 10:55:59 -04:00
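The substitution at issue, in sketch form: appendStringInfo() scans its format string even when there is nothing to substitute, which appendStringInfoString() avoids.

```c
#include "postgres.h"
#include "lib/stringinfo.h"

static void
build_query(StringInfo buf, const char *relname, int id)
{
    appendStringInfoString(buf, "SELECT * FROM ");   /* plain literal: no % parsing */
    appendStringInfoString(buf, relname);            /* plain string, likewise */
    appendStringInfo(buf, " WHERE id = %d", id);     /* a real format still uses appendStringInfo */
}
```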
Fujii Masao 6f9e39bc99 Fix typo in update scripts for some contrib modules. 2013-07-19 04:13:01 +09:00
Robert Haas 568d4138c6 Use an MVCC snapshot, rather than SnapshotNow, for catalog scans.
SnapshotNow scans have the undesirable property that, in the face of
concurrent updates, the scan can fail to see either the old or the new
versions of the row.  In many cases, we work around this by requiring
DDL operations to hold AccessExclusiveLock on the object being
modified; in some cases, the existing locking is inadequate and random
failures occur as a result.  This commit doesn't change anything
related to locking, but will hopefully pave the way to allowing lock
strength reductions in the future.

The major issue that has held us back from making this change in the past
is that taking an MVCC snapshot is significantly more expensive than
using a static special snapshot such as SnapshotNow.  However, testing
of various worst-case scenarios reveals that this problem is not
severe except under fairly extreme workloads.  To mitigate those
problems, we avoid retaking the MVCC snapshot for each new scan;
instead, we take a new snapshot only when invalidation messages have
been processed.  The catcache machinery already requires that
invalidation messages be sent before releasing the related heavyweight
lock; else other backends might rely on locally-cached data rather
than scanning the catalog at all.  Thus, making snapshot reuse
dependent on the same guarantees shouldn't break anything that wasn't
already subtly broken.

Patch by me.  Review by Michael Paquier and Andres Freund.
2013-07-02 09:47:01 -04:00
Bruce Momjian 9af4159fce pgindent run for release 9.3
This is the first run of the Perl-based pgindent script.  Also update
pgindent instructions.
2013-05-29 16:58:43 -04:00
Tom Lane 8a3b6772ae Fix contrib/dblink to handle inconsistent DateStyle/IntervalStyle safely.
If the remote database's settings of these GUCs are different from ours,
ambiguous datetime values may be read incorrectly.  To fix, temporarily
adopt the remote server's settings while we ingest a query result.

This is not a complete fix, since it doesn't do anything about ambiguous
values in commands sent to the remote server; but there seems little we
can do about that end of it given dblink's entirely textual API for
transmitted commands.

Back-patch to 9.2.  The hazard exists in all versions, but this patch
would need more work to apply before 9.2.  Given the lack of field
complaints about this issue, it doesn't seem worth the effort at present.

Daniel Farina and Tom Lane
2013-03-22 15:22:54 -04:00
Bruce Momjian bd61a623ac Update copyrights for 2013
Fully update git head, and update back branches in ./COPYRIGHT and
legal.sgml files.
2013-01-01 17:15:01 -05:00
Andrew Dunstan ad69bd052f Add mode where contrib installcheck runs each module in a separately named database.
Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This new mode, enabled by adding USE_MODULE_DB=1 to the make command
line, runs most modules in a database with the module name embedded in
it.

This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.

Second attempt at this, this time accommodating make versions older
than 3.82.

Still to be done: adapt to the MSVC build system.

Backpatch to 9.0, which is the earliest version it is reasonably
possible to test upgrading from.
2012-12-11 11:52:45 -05:00
Andrew Dunstan fc5c1bbbeb Revert "Add mode where contrib installcheck runs each module in a separately named database."
This reverts commit e2b3c21b05.
2012-12-03 15:00:51 -05:00
Andrew Dunstan e2b3c21b05 Add mode where contrib installcheck runs each module in a separately named database.
Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This mode, enabled by adding USE_MODULE_DB=1 to the make command line,
runs most modules in a database with the module name embedded in it.

This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.

Still to be done: adapt to the MSVC build system.

Backpatch to 9.0, which is the earliest version it is reasonably possible
to test upgrading from.
2012-12-02 17:20:38 -05:00
Tom Lane c3bf3ea2b6 Remove configure-option-dependent test cases from dblink tests.
The HINTs generated for these error cases vary across builds.  We
could try to work around that, but the test cases aren't really useful
enough to justify taking any trouble.

Per buildfarm.
2012-10-10 20:14:26 -04:00
Tom Lane 8255566f9d Create an improved FDW option validator function for contrib/dblink.
dblink now has its own validator function dblink_fdw_validator(), which is
better than the core function postgresql_fdw_validator() because it gets
the list of legal options from libpq instead of having a hard-wired list.

Make the dblink extension module provide a standard foreign data wrapper
dblink_fdw that encapsulates use of this validator, and recommend use of
that wrapper instead of making up wrappers on the fly.

Unfortunately, because ad-hoc wrappers *were* recommended practice
previously, it's not clear when we can get rid of postgresql_fdw_validator
without causing upgrade problems.  But this is a step in the right
direction.

Shigeru Hanada, reviewed by KaiGai Kohei
2012-10-10 16:53:08 -04:00
Alvaro Herrera c219d9b0a5 Split tuple struct defs from htup.h to htup_details.h
This reduces unnecessary exposure of other headers through htup.h, which
is very widely included by many files.

I have chosen to move the function prototypes to the new file as well,
because that means htup.h no longer needs to include tupdesc.h.  In
itself this doesn't have much effect in indirect inclusion of tupdesc.h
throughout the tree, because it's also required by execnodes.h; but it's
something to explore in the future, and it seemed best to do the htup.h
change now while I'm busy with it.
2012-08-30 16:52:35 -04:00
Alvaro Herrera 9df55c8c3f Fix assorted compilation failures in contrib
Evidently I failed to test a compile after my earlier header shuffling.
2012-08-28 23:50:49 -04:00
Tom Lane 41b9c8452b Replace libpq's "row processor" API with a "single row" mode.
After taking a while to digest the row-processor feature that was added to
libpq in commit 92785dac2e, we've concluded
it is over-complicated and too hard to use.  Leave the core infrastructure
changes in place (that is, there's still a row processor function inside
libpq), but remove the exposed API pieces, and instead provide a "single
row" mode switch that causes PQgetResult to return one row at a time in
separate PGresult objects.

This approach incurs more overhead than proper use of a row processor
callback would, since construction of a PGresult per row adds extra cycles.
However, it is far easier to use and harder to break.  The single-row mode
still affords applications the primary benefit that the row processor API
was meant to provide, namely not having to accumulate large result sets in
memory before processing them.  Preliminary testing suggests that we can
probably buy back most of the extra cycles by micro-optimizing construction
of the extra results, but that task will be left for another day.

Marko Kreen
2012-08-02 13:10:30 -04:00
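A compact example of the replacement API (assumes an already-established connection; the query text is illustrative):

```c
#include <stdio.h>
#include "libpq-fe.h"

/* Each row arrives in its own PGresult, so a large result set never has
 * to be accumulated in memory before processing. */
static void
stream_rows(PGconn *conn)
{
    PGresult *res;

    if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)"))
        return;
    if (!PQsetSingleRowMode(conn))      /* must be called right after PQsendQuery */
        return;

    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            printf("%s\n", PQgetvalue(res, 0, 0));
        /* the final PGRES_TUPLES_OK result marks the end of the set */
        PQclear(res);
    }
}
```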
Robert Haas d7c734841b Reduce messages about implicit indexes and sequences to DEBUG1.
Per recent discussion on pgsql-hackers, these messages are too
chatty for most users.
2012-07-04 20:35:29 -04:00
Bruce Momjian 927d61eeff Run pgindent on 9.2 source tree in preparation for first 9.3
commit-fest.
2012-06-10 15:20:04 -04:00
Robert Haas e4f06b70c9 Another typographical correction.
Noted by Guillaume Smet.
2012-04-24 08:15:45 -04:00
Robert Haas 5d4b60f2f2 Lots of doc corrections.
Josh Kupershmidt
2012-04-23 22:43:09 -04:00
Tom Lane 6f922ef88e Improve efficiency of dblink by using libpq's new row processor API.
This patch provides a test case for libpq's row processor API.
contrib/dblink can deal with very large result sets by dumping them into
a tuplestore (which can spill to disk) --- but until now, the intermediate
storage of the query result in a PGresult meant memory bloat for any large
result.  Now we use a row processor to convert the data to tuple form and
dump it directly into the tuplestore.

A limitation is that this only works for plain dblink() queries, not
dblink_send_query() followed by dblink_get_result().  In the latter
case we don't know the desired tuple rowtype soon enough.  While hack
solutions to that are possible, a different user-level API would
probably be a better answer.

Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
2012-04-04 18:39:08 -04:00
Tom Lane d843ed2116 Fix a couple of contrib/dblink bugs.
dblink_exec leaked temporary database connections if any error occurred
after connection setup, for example
	SELECT dblink_exec('...connect string...', 'select 1/0');
Add a PG_TRY block to ensure PQfinish gets done when it is needed.
(dblink_record_internal is on the hairy edge of needing similar treatment,
but seems not to be actively broken at the moment.)

Also, in 9.0 and up, only one of the three functions using tuplestore
return mode was properly checking that the query context would allow
a tuplestore result.

Noted while reviewing dblink patch.  Back-patch to all supported branches.
2012-04-03 20:43:15 -04:00
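The shape of the fix, sketched with an invented helper: the temporary connection has to be closed even when the remote command raises an error.

```c
#include "postgres.h"
#include "libpq-fe.h"

extern void execute_remote(PGconn *conn, const char *sql);  /* stand-in */

static void
exec_on_temp_connection(const char *connstr, const char *sql)
{
    PGconn *conn = PQconnectdb(connstr);

    PG_TRY();
    {
        execute_remote(conn, sql);      /* may ereport(ERROR, ...) */
    }
    PG_CATCH();
    {
        PQfinish(conn);                 /* don't leak the temporary connection */
        PG_RE_THROW();
    }
    PG_END_TRY();

    PQfinish(conn);
}
```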
Tom Lane b75fbe9191 Fix dblink's failure to report correct connection name in error messages.
The DBLINK_GET_CONN and DBLINK_GET_NAMED_CONN macros did not set the
surrounding function's conname variable, causing errors to be incorrectly
reported as having occurred on the "unnamed" connection in some cases.
This bug was actually visible in two cases in the regression tests,
but apparently whoever added those cases wasn't paying attention.

Noted by Kyotaro Horiguchi, though this is different from his proposed
patch.

Back-patch to 8.4; 8.3 does not have the same type of error reporting
so the patch is not relevant.
2012-03-29 17:52:28 -04:00
Bruce Momjian e126958c2e Update copyright notices for year 2012. 2012-01-01 18:01:58 -05:00
Tom Lane 458857cc9d Throw a useful error message if an extension script file is fed to psql.
We have seen one too many reports of people trying to use 9.1 extension
files in the old-fashioned way of sourcing them in psql.  Not only does
that usually not work (due to failure to substitute for MODULE_PATHNAME
and/or @extschema@), but if it did work they'd get a collection of loose
objects not an extension.  To prevent this, insert an \echo ... \quit
line that prints a suitable error message into each extension script file,
and teach commands/extension.c to ignore lines starting with \echo.
That should not only prevent any adverse consequences of loading a script
file the wrong way, but make it crystal clear to users that they need to
do it differently now.

Tom Lane, following an idea of Andrew Dunstan's.  Back-patch into 9.1
... there is not going to be much value in this if we wait till 9.2.
2011-10-12 15:45:03 -04:00
Bruce Momjian 6416a82a62 Remove unnecessary #include references, per pgrminclude script. 2011-09-01 10:04:27 -04:00
Tom Lane 1af37ec96d Replace errdetail("%s", ...) with errdetail_internal("%s", ...).
There may be some other places where we should use errdetail_internal,
but they'll have to be evaluated case-by-case.  This commit just hits
a bunch of places where invoking gettext is obviously a waste of cycles.
2011-07-16 14:22:18 -04:00
Alvaro Herrera b93f5a5673 Move Trigger and TriggerDesc structs out of rel.h into a new reltrigger.h
This lets us stop including rel.h into execnodes.h, which is a widely
used header.
2011-07-04 14:35:58 -04:00
Joe Conway 8af3596d6b Async dblink functions require a named connection, and therefore should
use DBLINK_GET_NAMED_CONN rather than DBLINK_GET_CONN.
Problem found by Peter Eisentraut and patch by Fujii Masao.
2011-06-25 15:58:07 -07:00
Peter Eisentraut f8ebe3bcc5 Support "make check" in contrib
Added a new option --extra-install to pg_regress to arrange installing
the respective contrib directory into the temporary installation.
This is currently not yet supported for Windows MSVC builds.

Updated the .gitignore files for contrib modules to ignore the
leftovers of a temp-install check run.

Changed the exit status of "make check" in a pgxs build (which still
does nothing) to 0 from 1.

Added "make check" in contrib to top-level "make check-world".
2011-04-25 22:27:11 +03:00
Tom Lane 029fac2264 Avoid use of CREATE OR REPLACE FUNCTION in extension installation files.
It was never terribly consistent to use OR REPLACE (because of the lack of
comparable functionality for data types, operators, etc), and
experimentation shows that it's now positively pernicious in the extension
world.  We really want a failure to occur if there are any conflicts, else
it's unclear what the extension-ownership state of the conflicted object
ought to be.  Most of the time, CREATE EXTENSION will fail anyway because
of conflicts on other object types, but an extension defining only
functions can succeed, with bad results.
2011-02-13 22:54:52 -05:00
Tom Lane 629b3af27d Convert contrib modules to use the extension facility.
This isn't fully tested as yet, in particular I'm not sure that the
"foo--unpackaged--1.0.sql" scripts are OK.  But it's time to get some
buildfarm cycles on it.

sepgsql is not converted to an extension, mainly because it seems to
require a very nonstandard installation process.

Dimitri Fontaine and Tom Lane
2011-02-13 22:54:49 -05:00
Bruce Momjian 5d950e3b0c Stamp copyrights for year 2011. 2011-01-01 13:18:15 -05:00
Itagaki Takahiro d3c1265443 Don't raise "identifier will be truncated" messages in dblink
except when creating new connections.
2010-11-25 19:40:58 +09:00
Peter Eisentraut fc946c39ae Remove useless whitespace at end of lines 2010-11-23 22:34:55 +02:00
Robert Haas 4343c0e546 Expose quote_literal_cstr() from core.
This eliminates the need for inefficient implementations of this
functionality in both contrib/dblink and contrib/tablefunc, so remove
them.  The upcoming patch implementing an in-core format() function
will also require this functionality.

In passing, add some regression tests.
2010-11-20 10:04:48 -05:00
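With the function exported, an extension can build quoted literals in one call; a small sketch (the table name and wrapper function are invented):

```c
#include "postgres.h"
#include "utils/builtins.h"

static char *
make_insert(const char *value)
{
    /* quote_literal_cstr() returns a palloc'd, properly quoted literal,
     * e.g. it's -> 'it''s' */
    return psprintf("INSERT INTO demo_table VALUES (%s)",
                    quote_literal_cstr(value));
}
```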
Peter Eisentraut 19e231bbda Improved parallel make support
Replace for loops in makefiles with proper dependencies.  Parallel
make can now span across directories.  Also, make -k and make -q work
properly.

GNU make 3.80 or newer is now required.
2010-11-12 22:15:16 +02:00
Tom Lane cc2c8152e6 Some more gitignore cleanups: cover contrib and PL regression test outputs.
Also do some further work in the back branches, where quite a bit wasn't
covered by Magnus' original back-patch.
2010-09-22 17:22:40 -04:00
Magnus Hagander fe9b36fd59 Convert cvsignore to gitignore, and add .gitignore for build targets. 2010-09-22 12:57:04 +02:00
Magnus Hagander 9f2e211386 Remove cvs keywords from all files. 2010-09-20 22:08:53 +02:00
Bruce Momjian 239d769e7e pgindent run for 9.0, second run 2010-07-06 19:19:02 +00:00
Tom Lane c0989c67fa Change the interpretation of the primary_key_attnums parameter of
dblink_build_sql_insert() and related functions.  Now the column numbers
are treated as logical not physical column numbers.  This will provide saner
behavior in the presence of dropped columns; furthermore, if we ever get
around to allowing rearrangement of logical column ordering, the original
definition would become nearly untenable from a usability standpoint.
Per recent discussion of dblink's handling of dropped columns.
Not back-patched for fear of breaking existing applications.
2010-06-15 20:29:01 +00:00
Tom Lane 3b3706d2cf Fix dblink_build_sql_insert() and related functions to handle dropped
columns correctly.  In passing, get rid of some dead logic in the
underlying get_sql_insert() etc functions --- there is no caller that
will pass null value-arrays to them.

Per bug report from Robert Voinea.
2010-06-15 19:04:15 +00:00
Tom Lane 48a7245391 Consolidate and improve checking of key-column-attnum arguments for
dblink_build_sql_insert() and related functions.  In particular, be sure to
reject references to dropped and out-of-range column numbers.  The numbers
are still interpreted as physical column numbers, though, for backward
compatibility.

This patch replaces Joe's patch of 2010-02-03, which handled only some aspects
of the problem.
2010-06-15 16:22:19 +00:00
Tom Lane 6bbaa3148d Rearrange dblink's dblink_build_sql_insert() and related routines to open and
lock the target relation just once per SQL function call.  The original coding
obtained and released lock several times per call.  Aside from saving a
not-insignificant number of cycles, this eliminates possible race conditions
if someone tries to modify the relation's schema concurrently.  Also
centralize locking and permission-checking logic.

Problem noted while investigating a trouble report from Robert Voinea --- his
problem is still to be fixed, though.
2010-06-14 20:49:33 +00:00
Itagaki Takahiro 5a41c317c5 Avoid "identifier will be truncated" warning in dblink
when the connection string is longer than NAMEDATALEN.
The previous fix for long connection names broke this behavior.
2010-06-09 03:39:26 +00:00
Itagaki Takahiro a624356072 Fix connection leak in dblink when dblink_connect() or dblink_connect_u()
end with "duplicate connection name" errors.

Backported to release 7.4.
2010-06-09 00:56:02 +00:00
Teodor Sigaev 3e7451a6df Add missed function dblink_connect_u(text[,text]) to uninstall script 2010-06-07 15:14:36 +00:00
Itagaki Takahiro 7dff7260af Fix dblink to handle connection names longer than NAMEDATALEN-2 (62 bytes).
Now long names are adjusted with truncate_identifier() and NOTICE messages
are raised if names are actually truncated.

Backported to release 8.0.
2010-06-03 09:38:33 +00:00
Bruce Momjian 65e806cba1 pgindent run for 9.0 2010-02-26 02:01:40 +00:00
Itagaki Takahiro 912eb88c7d Remove useless code to initialize TupleDesc from dblink_exec. 2010-02-24 05:20:49 +00:00
Robert Haas e26c539e9f Wrap calls to SearchSysCache and related functions using macros.
The purpose of this change is to eliminate the need for every caller
of SearchSysCache, SearchSysCacheCopy, SearchSysCacheExists,
GetSysCacheOid, and SearchSysCacheList to know the maximum number
of allowable keys for a syscache entry (currently 4).  This will
make it far easier to increase the maximum number of keys in a
future release should we choose to do so, and it makes the code
shorter, too.

Design and review by Tom Lane.
2010-02-14 18:42:19 +00:00
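The wrappers let callers pass only the keys they actually use instead of padding out to four; for example (the lookup target is illustrative):

```c
#include "postgres.h"
#include "access/htup.h"
#include "utils/syscache.h"

static bool
type_exists(Oid typid)
{
    /* formerly: SearchSysCache(TYPEOID, ObjectIdGetDatum(typid), 0, 0, 0) */
    HeapTuple tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));

    if (!HeapTupleIsValid(tup))
        return false;
    ReleaseSysCache(tup);
    return true;
}
```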
Joe Conway fdac8cf998 Check to ensure the number of primary key fields supplied does not
exceed the total number of non-dropped source table fields for
dblink_build_sql_*(). Addresses bug report from Rushabh Lathia.

Backpatch all the way to the 7.3 branch.
2010-02-03 23:01:11 +00:00
Joe Conway 56cbb611ec Rewrite dblink_record_internal() and dblink_fetch() to use a tuplestore
(SFRM_Materialize mode) to return tuples. Since we don't return from the
dblink function in tuplestore mode, release the PGresult with a PG_CATCH
block on error. Also rearrange to share the same code to materialize the
tuplestore. Patch by Takahiro Itagaki.
2010-01-24 22:19:38 +00:00
Bruce Momjian 0239800893 Update copyright for the year 2010. 2010-01-02 16:58:17 +00:00
Heikki Linnakangas 84d723b6ce Previous fix for temporary file management broke returning a set from
a PL/pgSQL function within an exception handler. Make sure we use the right
resource owner when we create the tuplestore to hold returned tuples.

Simplify tuplestore API so that the caller doesn't need to be in the right
memory context when calling tuplestore_put* functions. tuplestore.c
automatically switches to the memory context used when the tuplestore was
created. Tuplesort was already modified like this earlier. This patch also
removes the now useless MemoryContextSwitchTo() calls from callers.

Report by Aleksei on pgsql-bugs on Dec 22 2009. Backpatch to 8.1, like
the previous patch that broke this.
2009-12-29 17:40:59 +00:00
Tom Lane 1753337cf5 Improve psql's tabular display of wrapped-around data by inserting markers
in the formerly-always-blank columns just to left and right of the data.
Different marking is used for a line break caused by a newline in the data
than for a straight wraparound.  A newline break is signaled by a "+" in the
right margin column in ASCII mode, or a carriage return arrow in UNICODE mode.
Wraparound is signaled by a dot in the right margin as well as the following
left margin in ASCII mode, or an ellipsis symbol in the same places in UNICODE
mode.  "\pset linestyle old-ascii" is added to make the previous behavior
available if anyone really wants it.

In passing, this commit also cleans up a few regression test files that
had unintended spacing differences from the current actual output.

Roger Leigh, reviewed by Gabrielle Roth and other members of PDXPUG.
2009-11-22 05:20:41 +00:00
Joe Conway d6119d8091 plug dblink resource leak
dblink generates orphaned connections when called with a connection string,
fail_on_error = true, and an ERROR occurs. Discovery and patch by
Tatsuhito Kasahara. Introduced in 8.4.
2009-09-12 23:20:52 +00:00
Joe Conway f4095b4c4b Implement dblink_get_notify().
Adds the ability to retrieve async notifications using dblink,
via the addition of the function dblink_get_notify(). Original patch
by Marcus Kempe, suggestions by Tom Lane and Alvaro Herrera, patch
review and adjustments by Joe Conway.
2009-08-05 16:11:07 +00:00
Bruce Momjian d747140279 8.4 pgindent run, with new combined Linux/FreeBSD/MinGW typedef list
provided by Andrew.
2009-06-11 14:49:15 +00:00
Tom Lane f3e122fcdf Restore dblink_current_query() to being a C-language function, so as to not
create an ABI break between 8.3 and 8.4.  It is still just a wrapper around
the built-in current_query() function, but at a different implementation
level.  Per my proposal.

Note: this change doesn't break 8.4beta installations, since their
SQL-language definition of the function still works fine.
2009-06-09 17:41:02 +00:00
Joe Conway e5de601267 Default client encoding to server encoding for dblink connections. Addresses
issue raised by Ruzsinszky Attila and confirmed by others.

2009-06-09 16:35:36 +00:00
Joe Conway 4334695b30 Add support for using SQL/MED compliant FOREIGN DATA WRAPPER, SERVER,
and USER MAPPING as method to supply dblink connect parameters. Per
mailing list and PGCon discussions.
2009-06-06 21:27:56 +00:00
Joe Conway db02073305 Fix dblink_get_result() as reported by Oleksiy Shchukin. Refactor a bit
while we're at it per request by Tom Lane. Specifically, don't try to
perform dblink_send_query() via dblink_record_internal() -- it was
inappropriate and ugly.
2009-06-02 03:21:56 +00:00
Bruce Momjian 511db38ace Update copyright for 2009. 2009-01-01 17:24:05 +00:00
Tom Lane 76cc2fe6a1 Fix dblink and tablefunc to not return with the wrong CurrentMemoryContext.
Per buildfarm results.
2008-11-30 23:23:52 +00:00
Tom Lane cae7ad906a Fix dblink_connect() so that it verifies that a password is supplied in the
conninfo string *before* trying to connect to the remote server, not after.
As pointed out by Marko Kreen, in certain not-very-plausible situations
this could result in sending a password from the postgres user's .pgpass file,
or other places that non-superusers shouldn't have access to, to an
untrustworthy remote server.  The cleanest fix seems to be to expose libpq's
conninfo-string-parsing code so that dblink can check for a password option
without duplicating the parsing logic.

Joe Conway, with a little cleanup by Tom Lane
2008-09-22 13:55:14 +00:00
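A sketch of the check made possible by exposing the parser: scan the conninfo string for a password with libpq's PQconninfoParse() before any connection attempt (frontend-style code; error handling abbreviated):

```c
#include <string.h>
#include "libpq-fe.h"

static int
conninfo_has_password(const char *connstr)
{
    char *errmsg = NULL;
    PQconninfoOption *opts = PQconninfoParse(connstr, &errmsg);
    PQconninfoOption *o;
    int found = 0;

    if (opts == NULL)
    {
        if (errmsg)
            PQfreemem(errmsg);      /* real code would report the parse error */
        return 0;
    }

    for (o = opts; o->keyword != NULL; o++)
    {
        if (strcmp(o->keyword, "password") == 0 && o->val && o->val[0] != '\0')
            found = 1;
    }

    PQconninfoFree(opts);
    return found;
}
```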
Joe Conway c0241b9573 When an ERROR happens on a dblink remote connection, take
pains to pass the ERROR message components locally, including
using the passed SQLSTATE. Also wrap the passed info in an
appropriate CONTEXT message. Addresses complaint by Henry
Combrinck. Joe Conway, with much good advice from Tom Lane.
2008-07-03 03:56:57 +00:00
Tom Lane 2604359251 Improve hash_any() to use word-wide fetches when hashing suitably aligned
data.  This makes for a significant speedup at the cost that the results
now vary between little-endian and big-endian machines; which forces us
to add explicit ORDER BYs in a couple of regression tests to preserve
machine-independent comparison results.  Also, force initdb by bumping
catversion, since the contents of hash indexes will change (at least on
big-endian machines).

Kenneth Marshall and Tom Lane, based on work from Bob Jenkins.  This commit
does not adopt Bob's new faster mix() algorithm, however, since we still need
to convince ourselves that that doesn't degrade the quality of the hashing.
2008-04-06 16:54:49 +00:00
Bruce Momjian 470b2dc35d Make dblink_current_query() reference pg_catalog.current_query(), per Tom. 2008-04-05 02:44:42 +00:00
Bruce Momjian 67fe107554 Re-add dblink_current_query() for backward compatibility. 2008-04-05 02:26:14 +00:00
Bruce Momjian 4f22db501e Remove unneeded #include now that current_query() has moved to the backend. 2008-04-04 17:02:56 +00:00
Bruce Momjian f96928fde9 Implement current_query(), which shows the currently executing query.
At the same time, remove dblink/dblink_current_query() as it is no longer
necessary.
*BACKWARD COMPATIBILITY ISSUE* for dblink

Tomas Doran
2008-04-04 16:57:21 +00:00
Alvaro Herrera 73b0300b2a Move the HTSU_Result enum definition into snapshot.h, to avoid including
tqual.h into heapam.h.  This makes all inclusion of tqual.h explicit.

I also sorted alphabetically the includes on some source files.
2008-03-26 21:10:39 +00:00
Tom Lane 220db7ccd8 Simplify and standardize conversions between TEXT datums and ordinary C
strings.  This patch introduces four support functions cstring_to_text,
cstring_to_text_with_len, text_to_cstring, and text_to_cstring_buffer, and
two macros CStringGetTextDatum and TextDatumGetCString.  A number of
existing macros that provided variants on these themes were removed.

Most of the places that need to make such conversions now require just one
function or macro call, in place of the multiple notational layers that used
to be needed.  There are no longer any direct calls of textout or textin,
and we got most of the places that were using handmade conversions via
memcpy (there may be a few still lurking, though).

This commit doesn't make any serious effort to eliminate transient memory
leaks caused by detoasting toasted text objects before they reach
text_to_cstring.  We changed PG_GETARG_TEXT_P to PG_GETARG_TEXT_PP in a few
places where it was easy, but much more could be done.

Brendan Jurd and Tom Lane
2008-03-25 22:42:46 +00:00
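Illustrative use of the new helpers (function names invented); each direction is now a single call:

```c
#include "postgres.h"
#include "utils/builtins.h"

static Datum
label_as_text_datum(const char *label)
{
    /* replaces the old DirectFunctionCall1(textin, ...) dance */
    return CStringGetTextDatum(label);
}

static char *
text_arg_as_cstring(text *t)
{
    /* replaces hand-rolled palloc + memcpy + '\0' termination */
    return text_to_cstring(t);
}
```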
Tom Lane 66661858ae Use an indexscan not a heapscan to search pg_index in get_pkey_attnames.
Noted while looking for heapscans that might need to start from block
zero.
2008-01-14 02:49:47 +00:00