Commit Graph

33411 Commits

Author SHA1 Message Date
Simon Riggs
8cb53654db Add DROP INDEX CONCURRENTLY [IF EXISTS], uses ShareUpdateExclusiveLock 2012-04-06 10:21:40 +01:00
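A minimal libpq sketch of issuing the new form (the connection string and the index name some_idx are placeholders); note that DROP INDEX CONCURRENTLY cannot run inside an explicit transaction block:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* Placeholder connection string; adjust as needed. */
        PGconn     *conn = PQconnectdb("dbname=postgres");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /*
         * The new syntax: only ShareUpdateExclusiveLock is taken, so normal
         * reads and writes of the table continue while the index is dropped.
         * Must be issued outside an explicit transaction block.
         */
        PGresult   *res = PQexec(conn,
                                 "DROP INDEX CONCURRENTLY IF EXISTS some_idx");

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "drop failed: %s", PQerrorMessage(conn));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }
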
Robert Haas
21cc529698 checkopint -> checkpoint
Report by Guillaume Lelarge.
2012-04-05 21:37:33 -04:00
Robert Haas
662ca285a6 Put back code inadvertently deleted from exit_nicely.
Report by Andrew Dunstan.
2012-04-05 21:37:33 -04:00
Peter Eisentraut
05261ab624 NLS: Use msgmerge/xgettext --no-wrap and --sort-by-file
The option --no-wrap prevents wars with (most?) editors about proper
line wrapping.  --sort-by-file ensures consistent file order, for
easier diffing.
2012-04-05 22:28:13 +03:00
Robert Haas
bbc02243fc Allow pg_archivecleanup to strip optional file extensions.
Greg Smith and Jaime Casanova, reviewed by Alex Shulgin and myself.
2012-04-05 14:18:42 -04:00
Robert Haas
b736aef2ec Publish checkpoint timing information to pg_stat_bgwriter.
Greg Smith, Peter Geoghegan, and Robert Haas
2012-04-05 14:04:37 -04:00
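A small libpq sketch of reading the new counters; the column names checkpoint_write_time and checkpoint_sync_time (reported in milliseconds) and the connection parameters are assumptions here, not quoted from the commit:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");  /* placeholder */
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* Assumed column names; the view reports times in milliseconds. */
        res = PQexec(conn,
                     "SELECT checkpoint_write_time, checkpoint_sync_time "
                     "FROM pg_stat_bgwriter");

        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
            printf("checkpoint write: %s ms, sync: %s ms\n",
                   PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }
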
Tom Lane
a75b08066a Update obsolete comment.
Somebody didn't bother to fix this comment while adding foreign table
support to the code below it.

In passing, remove the explicit calling-out of relkind letters, which adds
complexity to the comment but doesn't help in understanding the code.
2012-04-05 13:05:02 -04:00
Robert Haas
97e26dc66d Correctly explain units used by function-timing stats functions.
The views are in milliseconds, but the raw functions return
microseconds.
2012-04-05 11:50:06 -04:00
Robert Haas
644828908f Expose track_iotiming data via the statistics collector.
Ants Aasma's original patch to add timing information for buffer I/O
requests exposed this data at the relation level, which was judged too
costly.  I've here exposed it at the database level instead.
2012-04-05 11:40:24 -04:00
Tom Lane
05dbd4a773 Fix plpgsql named-cursor-parameter feature for variable name conflicts.
The parser got confused if a cursor parameter had the same name as
a plpgsql variable.  Reported and diagnosed by Yeb Havinga, though
this isn't exactly his proposed fix.

Also, some mostly-but-not-entirely-cosmetic adjustments to the original
named-cursor-parameter patch, for code readability and better error
diagnostics.
2012-04-04 21:50:31 -04:00
Tom Lane
6f922ef88e Improve efficiency of dblink by using libpq's new row processor API.
This patch provides a test case for libpq's row processor API.
contrib/dblink can deal with very large result sets by dumping them into
a tuplestore (which can spill to disk) --- but until now, the intermediate
storage of the query result in a PGresult meant memory bloat for any large
result.  Now we use a row processor to convert the data to tuple form and
dump it directly into the tuplestore.

A limitation is that this only works for plain dblink() queries, not
dblink_send_query() followed by dblink_get_result().  In the latter
case we don't know the desired tuple rowtype soon enough.  While hack
solutions to that are possible, a different user-level API would
probably be a better answer.

Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
2012-04-04 18:39:08 -04:00
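A rough backend-side illustration of the receiving end of that approach: each row's text values are converted through the destination types' input functions and appended to a tuplestore, which can spill to disk. Names and the header list are abbreviated and illustrative, not the actual dblink code:

    #include "postgres.h"
    #include "funcapi.h"
    #include "utils/tuplestore.h"

    /*
     * Illustrative only: append one row of text-format column values to a
     * tuplestore, using an AttInMetadata built earlier from the result's
     * tuple descriptor.  The tuplestore copies the tuple and may spill to
     * disk as it grows, so the intermediate PGresult bloat is avoided.
     */
    static void
    store_one_row(Tuplestorestate *tupstore, AttInMetadata *attinmeta,
                  char **textvalues)
    {
        /* BuildTupleFromCStrings applies each column's type input function */
        HeapTuple   tuple = BuildTupleFromCStrings(attinmeta, textvalues);

        tuplestore_puttuple(tupstore, tuple);
    }
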
Tom Lane
92785dac2e Add a "row processor" API to libpq for better handling of large results.
Traditionally libpq has collected an entire query result before passing
it back to the application.  That provides a simple and transactional API,
but it's pretty inefficient for large result sets.  This patch allows the
application to process each row on-the-fly instead of accumulating the
rows into the PGresult.  Error recovery becomes a bit more complex, but
often that tradeoff is well worth making.

Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
2012-04-04 18:27:56 -04:00
Tom Lane
cb917e1544 Remove useless PGRES_COPY_BOTH "support" in psql.
There is no existing or foreseeable case in which psql should see a
PGRES_COPY_BOTH PQresultStatus; and if such a case ever emerges, it's a
pretty good bet that these code fragments wouldn't do the right thing
anyway.  Remove them, and let the existing default cases do the appropriate
thing, namely emit an "unexpected PQresultStatus" bleat.

Noted while working on libpq row processor patch, for which I was
considering adding a PGRES_SUSPENDED status code --- the same default-case
treatment would be appropriate for that.
2012-04-04 16:15:04 -04:00
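The default-case treatment mentioned above amounts to something like the following sketch; the function name and the set of explicitly handled cases are invented for illustration:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Sketch: let a default case catch any status we have no handling for. */
    static int
    handle_result(PGresult *res)
    {
        switch (PQresultStatus(res))
        {
            case PGRES_COMMAND_OK:
            case PGRES_TUPLES_OK:
                return 1;           /* normal completion */

            case PGRES_COPY_IN:
            case PGRES_COPY_OUT:
                return 1;           /* handled by dedicated COPY code paths */

            default:
                /* PGRES_COPY_BOTH and anything else lands here */
                fprintf(stderr, "unexpected PQresultStatus: %s\n",
                        PQresStatus(PQresultStatus(res)));
                return 0;
        }
    }
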
Tom Lane
c17e863bc7 Fix syslogger to not lose log coherency under high load.
The original coding of the syslogger had an arbitrary limit of 20 large
messages concurrently in progress, after which it would just punt and dump
message fragments to the output file separately.  Our ambitions are a bit
higher than that now, so allow the data structure to expand as necessary.

Reported and patched by Andrew Dunstan; some editing by Tom
2012-04-04 15:05:10 -04:00
Tom Lane
d843ed2116 Fix a couple of contrib/dblink bugs.
dblink_exec leaked temporary database connections if any error occurred
after connection setup, for example
	SELECT dblink_exec('...connect string...', 'select 1/0');
Add a PG_TRY block to ensure PQfinish gets done when it is needed.
(dblink_record_internal is on the hairy edge of needing similar treatment,
but seems not to be actively broken at the moment.)

Also, in 9.0 and up, only one of the three functions using tuplestore
return mode was properly checking that the query context would allow
a tuplestore result.

Noted while reviewing dblink patch.  Back-patch to all supported branches.
2012-04-03 20:43:15 -04:00
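A condensed sketch of the fix's shape: the libpq connection is closed in a PG_CATCH block before the error is re-thrown, so a failure in the remote query no longer leaks the temporary connection. Names are simplified and this is not the literal dblink_exec code:

    #include "postgres.h"
    #include "libpq-fe.h"

    /* Simplified shape of the fix; not the literal dblink_exec code. */
    static void
    exec_on_temp_connection(const char *connstr, const char *sql)
    {
        PGconn     *conn = PQconnectdb(connstr);

        if (PQstatus(conn) != CONNECTION_OK)
        {
            PQfinish(conn);
            ereport(ERROR, (errmsg("could not establish connection")));
        }

        PG_TRY();
        {
            PGresult   *res = PQexec(conn, sql);

            if (PQresultStatus(res) != PGRES_COMMAND_OK &&
                PQresultStatus(res) != PGRES_TUPLES_OK)
            {
                char       *msg = pstrdup(PQerrorMessage(conn));

                PQclear(res);
                ereport(ERROR, (errmsg("remote query failed: %s", msg)));
            }
            PQclear(res);
        }
        PG_CATCH();
        {
            /* release the temporary connection before propagating the error */
            PQfinish(conn);
            PG_RE_THROW();
        }
        PG_END_TRY();

        PQfinish(conn);
    }
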
Robert Haas
5e86c61a7e Arrange for on_exit_nicely to be thread-safe.
Extracted from Joachim Wieland's parallel pg_dump patch, with some
additional comments by me.
2012-04-03 08:39:03 -04:00
Peter Eisentraut
38b9693fd9 Add support for renaming domain constraints 2012-04-03 08:11:51 +03:00
Peter Eisentraut
c2cc5c3474 NLS: Seed Language field in PO header
Use msgmerge --lang option to seed the Language field, recently
introduced by gettext, in the header of the new PO file.
2012-04-02 02:58:37 +03:00
Peter Eisentraut
5633df2582 Fix recently introduced typo in NLS file lists 2012-04-02 02:39:34 +03:00
Tom Lane
d5881c036a Fix O(N^2) behavior in pg_dump when many objects are in dependency loops.
Combining the loop workspace with the record of already-processed objects
might have been a cute trick, but it behaves horridly if there are many
dependency loops to repair: the time spent in the first step of findLoop()
grows as O(N^2).  Instead use a separate flag array indexed by dump ID,
which we can check in constant time.  The length of the workspace array
is now never more than the actual length of a dependency chain, which
should be reasonably short in all cases of practical interest.  The code
is noticeably easier to understand this way, too.

Per gripe from Mike Roest.  Since this is a longstanding performance bug,
backpatch to all supported versions.
2012-03-31 15:51:07 -04:00
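The essential idea in schematic form: membership in the already-processed set is tested through a flag array indexed by dump ID rather than by scanning the loop workspace, making the check constant-time. Types and names below are placeholders, not pg_dump's actual structures:

    #include <stdbool.h>
    #include <stdlib.h>

    /* Placeholder types standing in for pg_dump's dumpable-object machinery */
    typedef int DumpId;
    typedef struct { DumpId dumpId; } DumpableObject;

    static bool *processed;         /* flag array indexed by dump ID */

    static void
    init_processed(DumpId maxDumpId)
    {
        /* calloc zeroes the array, so every object starts out unprocessed */
        processed = calloc(maxDumpId + 1, sizeof(bool));
    }

    /* O(1) membership check, replacing a scan of the loop workspace */
    static bool
    already_processed(const DumpableObject *obj)
    {
        return processed[obj->dumpId];
    }

    static void
    mark_processed(const DumpableObject *obj)
    {
        processed[obj->dumpId] = true;
    }
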
Tom Lane
0d8117abef Fix O(N^2) behavior in pg_dump for large numbers of owned sequences.
The loop that matched owned sequences to their owning tables required time
proportional to the number of owned sequences times the number of tables.
This work was only expended in selective-dump situations, which is probably
why the issue wasn't recognized long since.  Refactor slightly so that we
can perform this work after the index array for findTableByOid has been
set up, reducing the time to O(M log N).

Per gripe from Mike Roest.  Since this is a longstanding performance bug,
backpatch to all supported versions.
2012-03-31 14:42:17 -04:00
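A schematic of the O(M log N) approach: sort the table entries by OID once, then resolve each owned sequence's owning table with a binary search. The struct is a placeholder, not pg_dump's real TableInfo:

    #include <stdlib.h>

    typedef unsigned int Oid;

    /* Placeholder standing in for pg_dump's per-table metadata */
    typedef struct
    {
        Oid         oid;
    } TableEntry;

    static int
    cmp_by_oid(const void *a, const void *b)
    {
        Oid         oa = ((const TableEntry *) a)->oid;
        Oid         ob = ((const TableEntry *) b)->oid;

        return (oa > ob) - (oa < ob);
    }

    /* Build the lookup index once: O(N log N). */
    static void
    sort_tables(TableEntry *tables, size_t ntables)
    {
        qsort(tables, ntables, sizeof(TableEntry), cmp_by_oid);
    }

    /* Each owned sequence then needs only an O(log N) lookup. */
    static TableEntry *
    find_table_by_oid(TableEntry *tables, size_t ntables, Oid owner)
    {
        TableEntry  key = {owner};

        return bsearch(&key, tables, ntables, sizeof(TableEntry), cmp_by_oid);
    }

The same comparator serves both the one-time qsort and every subsequent bsearch, which is what keeps the per-sequence cost at O(log N).
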
Tom Lane
c252a17d82 Rename frontend keyword arrays to avoid conflict with backend.
ecpg and pg_dump each contain keyword arrays with structure similar
to the backend's keyword array.  Up to now, we actually named those
arrays the same as the backend's and relied on parser/keywords.h
to declare them.  This seems a tad too cute, though, and it breaks
now that we need to PGDLLIMPORT-decorate the backend symbols.
Rename to avoid the problem.  Per buildfarm.

(It strikes me that maybe we should get rid of the separate keywords.c
files altogether, and just define these arrays in the modules that use
them, but that's a rather more invasive change.)
2012-03-31 13:15:53 -04:00
Tom Lane
a52e6fe7bc Fix glitch recently introduced in psql tab completion.
Over-optimization (by me, looks like :-() broke the case of recognizing
a word boundary just before a quoted identifier.  Reported and diagnosed
by Dean Rasheed.
2012-03-31 11:19:23 -04:00
Tom Lane
5e83854d71 Add PGDLLIMPORT to ScanKeywords and NumScanKeywords.
Per buildfarm, this is now needed by contrib/pg_stat_statements.
2012-03-31 10:56:21 -04:00
Peter Eisentraut
194b5ea3d0 Add new files to NLS file lists
Some of these are newly added, some are older and were forgotten, some
don't contain any translatable strings right now but look like they
could in the future.
2012-03-30 20:46:23 +03:00
Peter Eisentraut
1d1361b6b7 Replace printf format %i by %d
see also ce8d7bb644
2012-03-30 20:40:46 +03:00
Peter Eisentraut
6ca365bff2 pgxs: Supply default values for BISON and FLEX variables
Otherwise, the availability of these variables depends on what
happened to be available at the time the PostgreSQL build was
configured.
2012-03-30 20:36:49 +03:00
Robert Haas
3f427c1353 pg_test_timing: Lame hack to work around compiler warning.
Fujii Masao, plus a comment by me.  While I'm at it, correctly tabify
this chunk of code.
2012-03-30 08:17:54 -04:00
Tom Lane
b75fbe9191 Fix dblink's failure to report correct connection name in error messages.
The DBLINK_GET_CONN and DBLINK_GET_NAMED_CONN macros did not set the
surrounding function's conname variable, causing errors to be incorrectly
reported as having occurred on the "unnamed" connection in some cases.
This bug was actually visible in two cases in the regression tests,
but apparently whoever added those cases wasn't paying attention.

Noted by Kyotaro Horiguchi, though this is different from his proposed
patch.

Back-patch to 8.4; 8.3 does not have the same type of error reporting
so the patch is not relevant.
2012-03-29 17:52:28 -04:00
Tom Lane
566a1d43cf Improve contrib/pg_stat_statements' handling of PREPARE/EXECUTE statements.
It's actually more useful for the module to ignore these.  Ignoring
EXECUTE (and not incrementing the nesting level) allows the executor
hooks to charge the time to the underlying prepared query, which
shows up as a stats entry with the original PREPARE as query string
(possibly modified by suppression of constants, which might not be
terribly useful here but it's not worth avoiding).  This is much more
useful than cluttering the stats table with a distinct entry for each
textually distinct EXECUTE.

Experimentation with this idea shows that it's also preferable to ignore
PREPARE.  If we don't, we get two stats table entries, one with the query
string hash and one with the jumble-derived hash, but with the same visible
query string (modulo those constants).  This is confusing and not very
helpful, since the first entry will only receive costs associated with
initial planning of the query, which is not something counted at all
normally by pg_stat_statements.  (And if we do start tracking planning
costs, we'd want them blamed on the other hash table entry anyway.)
2012-03-29 16:42:09 -04:00
Tom Lane
e0e4ebe384 Improve handling of utility statements containing plannable statements.
When tracking nested statements, contrib/pg_stat_statements formerly
double-counted the execution costs of utility statements that directly
contain an executable statement, such as EXPLAIN and DECLARE CURSOR.
This was not obvious since the ProcessUtility and Executor hooks
would each add their measured costs to the same stats table entry.
However, with the new implementation that hashes utility and plannable
statements differently, this showed up as seemingly-duplicate stats
entries.  Fix that by disabling the Executor hooks when the query has a
queryId of zero, which was the case already for such statements but is now
more clearly specified in the code.  (The zero queryId was causing problems
anyway because all such statements would add to a single bogus entry.)

The PREPARE/EXECUTE case still results in counting the same execution
in two different stats table entries, but it should be much less surprising
to users that there are two entries in such cases.

In passing, include a CommonTableExpr's ctename in the query hash.
I had left it out originally on the grounds that we wanted to omit all
inessential aliases, but since RTE_CTE RTEs are hashing their referenced
names, we'd better hash the CTE names too to make sure we don't hash
semantically different queries the same.
2012-03-29 15:32:59 -04:00
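The "disable the Executor hooks when queryId is zero" logic reduces to a check of roughly this shape in the ExecutorStart hook; this is a hedged sketch (hook installation and the timing instrumentation are omitted), not the module's exact code:

    #include "postgres.h"
    #include "executor/executor.h"

    static ExecutorStart_hook_type prev_ExecutorStart = NULL;
    static bool query_is_tracked = false;   /* illustrative flag */

    /*
     * Sketch: a statement reached only through a utility command such as
     * EXPLAIN carries queryId == 0, so the executor-side accounting is
     * skipped and only the ProcessUtility hook charges its cost.
     */
    static void
    sketch_ExecutorStart(QueryDesc *queryDesc, int eflags)
    {
        query_is_tracked = (queryDesc->plannedstmt->queryId != 0);

        if (prev_ExecutorStart)
            prev_ExecutorStart(queryDesc, eflags);
        else
            standard_ExecutorStart(queryDesc, eflags);

        /* timing instrumentation would be requested here only when tracked */
    }
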
Peter Eisentraut
2005b77b8f initdb: Mark more messages for translation
Some Windows-only messages had apparently been forgotten so far.

Also make the wording of the messages more consistent with similar
messages in other parts, such as pg_ctl and pg_regress.
2012-03-29 21:16:28 +03:00
Simon Riggs
68219aaf6b Correct epoch of txid_current() when executed on a Hot Standby server.
Initialise ckptXidEpoch from starting checkpoint and maintain the correct
value as we roll forwards. This allows GetNextXidAndEpoch() to return the
correct epoch when executed during recovery. Backpatch to 9.0 when the
problem is first observable by a user.

Bug report from Daniel Farina
2012-03-29 14:55:30 +01:00
Andrew Dunstan
aeca650226 Unbreak Windows builds broken by pgpipe removal. 2012-03-29 04:11:57 -04:00
Heikki Linnakangas
5762a4d909 Inherit max_safe_fds to child processes in EXEC_BACKEND mode.
Postmaster sets max_safe_fds by testing how many open file descriptors it
can open, and that is normally inherited by all child processes at fork().
Not so on EXEC_BACKEND, i.e. Windows, however. Because of that, we
effectively ignored max_files_per_process on Windows, and always assumed
a conservative default of 32 simultaneous open files. That could have an
impact on performance, if you need to access a lot of different files
in a query. After this patch, the value is passed to child processes by
save/restore_backend_variables() among many other global variables.

It has been like this forever, but given the lack of complaints about it,
I'm not backpatching this.
2012-03-29 08:19:11 +03:00
Andrew Dunstan
d2c1740dc2 Remove now redundant pgpipe code. 2012-03-28 23:24:07 -04:00
Tom Lane
7313cc0163 Improve contrib/pg_stat_statements to lump "similar" queries together.
pg_stat_statements now hashes selected fields of the analyzed parse tree
to assign a "fingerprint" to each query, and groups all queries with the
same fingerprint into a single entry in the pg_stat_statements view.
In practice it is expected that queries with the same fingerprint will be
equivalent except for values of literal constants.  To make the display
more useful, such constants are replaced by "?" in the displayed query
strings.

This mechanism currently supports only optimizable queries (SELECT,
INSERT, UPDATE, DELETE).  Utility commands are still matched on the
basis of their literal query strings.

There remain some open questions about how to deal with utility statements
that contain optimizable queries (such as EXPLAIN and SELECT INTO) and how
to deal with expiring speculative hashtable entries that are made to save
the normalized form of a query string.  However, fixing these issues should
require only localized changes, and since there are other open patches
involving contrib/pg_stat_statements, it seems best to go ahead and commit
what we've got.

Peter Geoghegan, reviewed by Daniel Farina
2012-03-28 21:01:23 -04:00
Peter Eisentraut
4e1c72079a Run maintainer-check on all PO files, not only configured ones
The intent is to allow configure --enable-nls=xx for installation
speed and size, but have maintainer-check check all source files
regardless.
2012-03-28 21:06:45 +03:00
Peter Eisentraut
03f0c08f47 Tweak markup to avoid extra whitespace in man pages 2012-03-28 20:37:19 +03:00
Robert Haas
7f63527c82 Attempt to unbreak pg_test_timing on Windows.
Per buildfarm, and Álvaro Herrera.
2012-03-28 12:22:57 -04:00
Robert Haas
ada763cfcd pg_basebackup: Error handling fixes.
Thomas Ogrisegg and Fujii Masao
2012-03-28 12:19:22 -04:00
Robert Haas
81f6bbe8ad pg_basebackup: Error message improvements.
Fujii Masao
2012-03-28 08:43:35 -04:00
Robert Haas
9c272da8c4 Doc fix for pg_test_timing.
Fujii Masao
2012-03-28 08:16:19 -04:00
Robert Haas
cee523867d pg_test_timing utility, to measure clock monotonicity and timing cost.
Ants Aasma, Greg Smith
2012-03-27 16:14:00 -04:00
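The kind of measurement the utility performs can be sketched independently: read the clock in a tight loop, accumulate the deltas to estimate per-read cost, and count any backward steps. This is an illustration, not pg_test_timing's actual code:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    static int64_t
    now_ns(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t) ts.tv_sec * 1000000000 + ts.tv_nsec;
    }

    int
    main(void)
    {
        const long  loops = 1000000;
        int64_t     prev = now_ns();
        int64_t     total = 0;
        long        backwards = 0;

        for (long i = 0; i < loops; i++)
        {
            int64_t     cur = now_ns();
            int64_t     diff = cur - prev;

            if (diff < 0)
                backwards++;    /* a backward step would mean a broken clock */
            else
                total += diff;
            prev = cur;
        }

        printf("avg cost per clock read: %.1f ns, backward steps: %ld\n",
               (double) total / loops, backwards);
        return 0;
    }
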
Robert Haas
5b4f346611 Expose track_iotiming information via pg_stat_statements.
Ants Aasma, reviewed by Greg Smith, with very minor tweaks by me.
2012-03-27 15:18:49 -04:00
Tom Lane
5d3fcc4c2e Bend parse location rules for the convenience of pg_stat_statements.
Generally, the parse location assigned to a multiple-token construct is
the location of its leftmost token.  This commit breaks that rule for
the syntaxes TYPENAME 'LITERAL' and CAST(CONSTANT AS TYPENAME) --- the
resulting Const will have the location of the literal string, not the
typename or CAST keyword.  The cases where this matters are pretty thin on
the ground (no error messages in the regression tests change, for example),
and it's unlikely that any user would be confused anyway by an error cursor
pointing at the literal.  But still it's less than consistent.  The reason
for changing it is that contrib/pg_stat_statements wants to know the parse
location of the original literal, and it was agreed that this is the least
unpleasant way to preserve that information through parse analysis.

Peter Geoghegan
2012-03-27 15:17:41 -04:00
Tom Lane
a40fa613b5 Add some infrastructure for contrib/pg_stat_statements.
Add a queryId field to Query and PlannedStmt.  This is not used by the
core backend, except for being copied around at appropriate times.
It's meant to allow plug-ins to track a particular query forward from
parse analysis to execution.

The queryId is intentionally not dumped into stored rules (and hence this
commit doesn't bump catversion).  You could argue that choice either way,
but it seems better that stored rule strings not have any dependency
on plug-ins that might or might not be present.

Also, add a post_parse_analyze_hook that gets invoked at the end of
parse analysis (but only for top-level analysis of complete queries,
not cases such as analyzing a domain's default-value expression).
This is mainly meant to be used to compute and assign a queryId,
but it could have other applications.

Peter Geoghegan
2012-03-27 15:17:40 -04:00
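A hedged extension-side sketch of how the new hook and queryId field fit together, assuming the two-argument hook signature of this era; the fingerprint computation is reduced to a trivial placeholder:

    #include "postgres.h"
    #include "fmgr.h"
    #include "parser/analyze.h"

    PG_MODULE_MAGIC;

    static post_parse_analyze_hook_type prev_post_parse_analyze_hook = NULL;

    /* Placeholder fingerprint; the real module jumbles the parse tree. */
    static uint32
    trivial_query_fingerprint(Query *query)
    {
        return (uint32) query->commandType + 1;
    }

    static void
    sketch_post_parse_analyze(ParseState *pstate, Query *query)
    {
        if (prev_post_parse_analyze_hook)
            prev_post_parse_analyze_hook(pstate, query);

        /* Assign the queryId here so it travels with the query to execution. */
        if (query->queryId == 0)
            query->queryId = trivial_query_fingerprint(query);
    }

    void
    _PG_init(void)
    {
        prev_post_parse_analyze_hook = post_parse_analyze_hook;
        post_parse_analyze_hook = sketch_post_parse_analyze;
    }
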
Robert Haas
40b9b95769 New GUC, track_iotiming, to track I/O timings.
Currently, the only way to see the numbers this gathers is via
EXPLAIN (ANALYZE, BUFFERS), but the plan is to add visibility through
the stats collector and pg_stat_statements in subsequent patches.

Ants Aasma, reviewed by Greg Smith, with some further changes by me.
2012-03-27 14:55:02 -04:00
Tom Lane
98316e211b Silence compiler warning about uninitialized variable. 2012-03-27 14:47:23 -04:00
Peter Eisentraut
dd024c22f1 pg_dump: Small message adjustment for consistency 2012-03-27 19:52:39 +03:00