Commit Graph

10753 Commits

Author SHA1 Message Date
Bruce Momjian 17665f6115 Re-add documentation recommendation to use gzip/gunzip for archive file
storage.
2012-04-09 14:50:48 -04:00
Bruce Momjian 75fcb935bc Update documentation to more clearly label the streaming replication option. 2012-04-09 14:40:16 -04:00
Bruce Momjian 8213624fc9 Remove documentation mention of pglesslog, which was added in 2009
because there was only a beta for 9.0 and it does not compile on 9.1.
2012-04-09 14:12:38 -04:00
Heikki Linnakangas 140a4fbf1a Remove link to ODBCng project from the docs.
Thom Brown pointed out that the URL was out of date, and Devrim GÜNDÜZ
pointed out that the project isn't maintained anymore.
2012-04-09 09:47:25 +03:00
Tom Lane d75829a682 Update URL for pgtclng project.
Thom Brown
2012-04-06 19:00:14 -04:00
Peter Eisentraut 6c41948c39 Fix some typos in the documentation
Thom Brown
2012-04-06 23:55:24 +03:00
Peter Eisentraut 25028a27c8 Correct various system catalog/view definitions in the documentation
Thom Brown
2012-04-06 23:54:27 +03:00
Tom Lane cea49fe82f Dept of second thoughts: improve the API for AnalyzeForeignTable.
If we make the initially-called function return the table physical-size
estimate, acquire_inherited_sample_rows will be able to use that to
allocate numbers of samples among child tables, when the day comes that
we want to support foreign tables in inheritance trees.
2012-04-06 16:04:10 -04:00
Tom Lane 263d9de66b Allow statistics to be collected for foreign tables.
ANALYZE now accepts foreign tables and allows the table's FDW to control
how the sample rows are collected.  (But only manual ANALYZEs will touch
foreign tables, for the moment, since among other things it's not very
clear how to handle remote permissions checks in an auto-analyze.)

contrib/file_fdw is extended to support this.

Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
2012-04-06 15:02:35 -04:00
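
A minimal SQL sketch of the feature above, using contrib/file_fdw as in the commit; the server, table, and file names here are hypothetical:

    CREATE EXTENSION file_fdw;
    CREATE SERVER file_srv FOREIGN DATA WRAPPER file_fdw;
    CREATE FOREIGN TABLE words (word text)
        SERVER file_srv
        OPTIONS (filename '/tmp/words.txt', format 'text');

    -- With this commit, a manual ANALYZE asks the table's FDW to
    -- supply the sample rows used for statistics.
    ANALYZE words;
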
Simon Riggs 8cb53654db Add DROP INDEX CONCURRENTLY [IF EXISTS], uses ShareUpdateExclusiveLock 2012-04-06 10:21:40 +01:00
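
A sketch of the new syntax added above; the index name is hypothetical:

    -- Drops the index while holding only ShareUpdateExclusiveLock,
    -- so concurrent reads and writes of the table are not blocked;
    -- IF EXISTS suppresses the error if the index is already gone.
    DROP INDEX CONCURRENTLY IF EXISTS orders_customer_id_idx;
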
Robert Haas bbc02243fc Allow pg_archivecleanup to strip optional file extensions.
Greg Smith and Jaime Casanova, reviewed by Alex Shulgin and myself.
2012-04-05 14:18:42 -04:00
Robert Haas b736aef2ec Publish checkpoint timing information to pg_stat_bgwriter.
Greg Smith, Peter Geoghegan, and Robert Haas
2012-04-05 14:04:37 -04:00
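
A sketch of reading the new checkpoint timing counters; the column names below match the pg_stat_bgwriter columns this patch is understood to add (values in milliseconds), so verify them against the view definition in your build:

    SELECT checkpoint_write_time, checkpoint_sync_time
    FROM pg_stat_bgwriter;
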
Robert Haas 97e26dc66d Correctly explain units used by function-timing stats functions.
The views are in milliseconds, but the raw functions return
microseconds.
2012-04-05 11:50:06 -04:00
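
For example, the same counters are visible in two places with different units; pg_stat_get_function_time is the stock per-function statistics accessor, and the function name myfunc is hypothetical:

    -- The view reports total_time/self_time in milliseconds ...
    SELECT funcname, calls, total_time, self_time
    FROM pg_stat_user_functions;

    -- ... while the raw statistics function returns microseconds.
    SELECT pg_stat_get_function_time('myfunc'::regproc);
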
Robert Haas 644828908f Expose track_iotiming data via the statistics collector.
Ants Aasma's original patch to add timing information for buffer I/O
requests exposed this data at the relation level, which was judged too
costly.  I've here exposed it at the database level instead.
2012-04-05 11:40:24 -04:00
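
A sketch of reading the database-level I/O timing counters; the column names below (blk_read_time, blk_write_time) are as they appear in the released pg_stat_database view, so check your catalog version:

    SELECT datname, blk_read_time, blk_write_time
    FROM pg_stat_database
    ORDER BY blk_read_time + blk_write_time DESC;
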
Tom Lane 6f922ef88e Improve efficiency of dblink by using libpq's new row processor API.
This patch provides a test case for libpq's row processor API.
contrib/dblink can deal with very large result sets by dumping them into
a tuplestore (which can spill to disk) --- but until now, the intermediate
storage of the query result in a PGresult meant memory bloat for any large
result.  Now we use a row processor to convert the data to tuple form and
dump it directly into the tuplestore.

A limitation is that this only works for plain dblink() queries, not
dblink_send_query() followed by dblink_get_result().  In the latter
case we don't know the desired tuple rowtype soon enough.  While hack
solutions to that are possible, a different user-level API would
probably be a better answer.

Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
2012-04-04 18:39:08 -04:00
Tom Lane 92785dac2e Add a "row processor" API to libpq for better handling of large results.
Traditionally libpq has collected an entire query result before passing
it back to the application.  That provides a simple and transactional API,
but it's pretty inefficient for large result sets.  This patch allows the
application to process each row on-the-fly instead of accumulating the
rows into the PGresult.  Error recovery becomes a bit more complex, but
often that tradeoff is well worth making.

Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
2012-04-04 18:27:56 -04:00
Peter Eisentraut 38b9693fd9 Add support for renaming domain constraints 2012-04-03 08:11:51 +03:00
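
A sketch of the new domain-constraint syntax; the domain and constraint names are hypothetical:

    CREATE DOMAIN us_postal_code AS text
        CONSTRAINT postal_code_check CHECK (VALUE ~ '^\d{5}$');

    ALTER DOMAIN us_postal_code
        RENAME CONSTRAINT postal_code_check TO us_postal_code_check;
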
Tom Lane 7313cc0163 Improve contrib/pg_stat_statements to lump "similar" queries together.
pg_stat_statements now hashes selected fields of the analyzed parse tree
to assign a "fingerprint" to each query, and groups all queries with the
same fingerprint into a single entry in the pg_stat_statements view.
In practice it is expected that queries with the same fingerprint will be
equivalent except for values of literal constants.  To make the display
more useful, such constants are replaced by "?" in the displayed query
strings.

This mechanism currently supports only optimizable queries (SELECT,
INSERT, UPDATE, DELETE).  Utility commands are still matched on the
basis of their literal query strings.

There remain some open questions about how to deal with utility statements
that contain optimizable queries (such as EXPLAIN and SELECT INTO) and how
to deal with expiring speculative hashtable entries that are made to save
the normalized form of a query string.  However, fixing these issues should
require only localized changes, and since there are other open patches
involving contrib/pg_stat_statements, it seems best to go ahead and commit
what we've got.

Peter Geoghegan, reviewed by Daniel Farina
2012-03-28 21:01:23 -04:00
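
A sketch of the grouped view once the extension is loaded (it must also be listed in shared_preload_libraries); column names are as of this development cycle, and literal constants in the displayed query text appear as "?":

    CREATE EXTENSION pg_stat_statements;

    SELECT query, calls, total_time
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 5;
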
Peter Eisentraut 03f0c08f47 Tweak markup to avoid extra whitespace in man pages 2012-03-28 20:37:19 +03:00
Robert Haas 9c272da8c4 Doc fix for pg_test_timing.
Fujii Masao
2012-03-28 08:16:19 -04:00
Robert Haas cee523867d pg_test_timing utility, to measure clock monotonicity and timing cost.
Ants Aasma, Greg Smith
2012-03-27 16:14:00 -04:00
Robert Haas 5b4f346611 Expose track_iotiming information via pg_stat_statements.
Ants Aasma, reviewed by Greg Smith, with very minor tweaks by me.
2012-03-27 15:18:49 -04:00
Robert Haas 40b9b95769 New GUC, track_iotiming, to track I/O timings.
Currently, the only way to see the numbers this gathers is via
EXPLAIN (ANALYZE, BUFFERS), but the plan is to add visibility through
the stats collector and pg_stat_statements in subsequent patches.

Ants Aasma, reviewed by Greg Smith, with some further changes by me.
2012-03-27 14:55:02 -04:00
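
A sketch using the GUC name as spelled in this commit; the table name is hypothetical:

    SET track_iotiming = on;

    -- The gathered I/O timings show up in the buffer statistics
    -- printed by EXPLAIN (ANALYZE, BUFFERS).
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM accounts;
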
Peter Eisentraut 206bec11bd Improve PL/Python database access function documentation
Organize the function descriptions as a list instead of running text,
for easier access.
2012-03-26 21:15:16 +03:00
Tom Lane 0339047bc9 Code review for protransform patches.
Fix loss of previous expression-simplification work when a transform
function fires: we must not simply revert to untransformed input tree.
Instead build a dummy FuncExpr node to pass to the transform function.
This has the additional advantage of providing a simpler, more uniform
API for transform functions.

Move documentation to a somewhat less buried spot, relocate some
poorly-placed code, be more wary of null constants and invalid typmod
values, add an opr_sanity check on protransform function signatures,
and some other minor cosmetic adjustments.

Note: although this patch touches pg_proc.h, no need for catversion
bump, because the changes are cosmetic and don't actually change the
intended catalog contents.
2012-03-23 17:29:57 -04:00
Robert Haas e08b4101e1 Clarify that PQconninfoParse returns an array with all legal options.
Per discussion with Dmitriy Igrishin and Tom Lane.
2012-03-22 22:09:20 -04:00
Robert Haas fdf719cd28 Doc clarifications regarding use of varlena.
Jay Levitt, reviewed by Tom Lane.
2012-03-22 15:45:42 -04:00
Robert Haas e83f827f5a Update docs on numeric storage requirements.
Since 9.1, the minimum overhead is three bytes, not five.

Fujii Masao
2012-03-22 15:40:27 -04:00
Tom Lane 04dfc87749 If a role has a password expiration date, show that in psql's \du output.
Per a suggestion from Euler Taveira, it seems like a good idea to include
this information in \du (and \dg) output.  This costs nothing for people
who are not using the VALID UNTIL feature, while for those who are, it's
rather critical information.

Fabrízio de Royes Mello
2012-03-22 02:08:25 -04:00
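
For instance (the role name, password, and date are hypothetical), after creating a role with an expiration date, \du in psql now shows that date alongside the role's other attributes:

    CREATE ROLE reporting LOGIN PASSWORD 'secret'
        VALID UNTIL '2012-12-31';

    -- In psql:
    -- \du reporting
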
Tom Lane f2386d7136 Fix configure's search for collateindex.pl.
PGAC_PATH_COLLATEINDEX supposed that it could use AC_PATH_PROGS to search
for collateindex.pl, but that macro will only accept files that are marked
executable, and at least some DocBook installations don't mark the script
executable (a case the docs Makefile was already prepared for).  Accept the
script if it's present and readable in $DOCBOOKSTYLE/bin, and otherwise
search the PATH as before.

Having fixed that up, we don't need the fallback case that was in the docs
Makefile, and instead can throw an understandable error if configure didn't
find the script.  Per recent trouble report from John Lumby.
2012-03-22 00:46:03 -04:00
Robert Haas 5b9c1e6d52 Doc updates for index-only scans.
Document that routine vacuuming is now also important for the purpose
of index-only scans; and mention in the section that describes the
visibility map that it is used to implement index-only scans.

Marti Raudsepp, with some changes by me.
2012-03-21 14:52:44 -04:00
Tom Lane 64c604898e Improve the -l (limit) option recently added to contrib/vacuumlo.
Instead of just stopping after removing an arbitrary subset of orphaned
large objects, commit and start a new transaction after each -l objects.
This is just as effective as the original patch at limiting the number of
locks used, and it doesn't require doing the OID collection process
repeatedly to get everything.  Since the option no longer changes the
fundamental behavior of vacuumlo, and it avoids a known server-side
limitation, enable it by default (with a default limit of 1000 LOs per
transaction).

In passing, be more careful about properly quoting the names of tables
and fields, and do some other cosmetic cleanup.
2012-03-20 19:05:44 -04:00
Alvaro Herrera b251cf3193 Fix trigger example code to match header changes
I should have done this in b93f5a5673 but
didn't notice the problem at the time.

Per report from Marco Nenciarini
2012-03-20 16:50:18 -03:00
Alvaro Herrera 438269eb10 Update struct Trigger in docs 2012-03-20 16:18:59 -03:00
Robert Haas 0b43edbca7 Remove stray word from sepgsql documentation. 2012-03-20 15:10:33 -04:00
Peter Eisentraut 313d29f740 Add note about column privilege behavior to REVOKE reference page
suggested by Josh Berkus
2012-03-17 10:34:00 +02:00
Robert Haas 4012810a68 A couple more fixes for the sepgsql documentation. 2012-03-15 16:49:44 -04:00
Robert Haas d0ed9efdf7 Copy editing of sepgsql documentation. 2012-03-15 16:37:40 -04:00
Robert Haas 523176cbf1 sepgsql_setcon().
This is intended as infrastructure to allow sepgsql to cooperate with
connection pooling software, by allowing the effective security label
to be set for each new connection.

KaiGai Kohei, reviewed by Yeb Havinga.
2012-03-15 16:08:40 -04:00
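
A sketch of how a connection pooler might call this, assuming the one-argument text form of the function; the SELinux security label shown is hypothetical:

    SELECT sepgsql_setcon('unconfined_u:unconfined_r:unconfined_t:s0');
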
Bruce Momjian 717f6d6085 In pg_upgrade, add various logging improvements:
	add ability to control permissions of created files
	have psql echo its queries for easier debugging
	output four separate log files, and delete them on success
	add -r/--retain option to keep log files after success
	make log files append-only
	remove -g/-G/-l logging options
	suggest tailing the appropriate log file on failure
	enhance -v/--verbose behavior
2012-03-12 19:48:03 -04:00
Bruce Momjian 9a39583264 Remove tabs in SGML files 2012-03-12 10:13:42 -04:00
Tom Lane 8142166162 Make parameter name consistent with syntax summary.
Thomas Hunger
2012-03-11 16:56:26 -04:00
Tom Lane 1e4964478a Fix documented type of t_infomask2.
Per Koizumi Satoru
2012-03-11 16:53:04 -04:00
Tom Lane c6a11b89e4 Teach SPGiST to store nulls and do whole-index scans.
This patch fixes the other major compatibility-breaking limitation of
SPGiST, that it didn't store anything for null values of the indexed
column, and so could not support whole-index scans or "x IS NULL"
tests.  The approach is to create a wholly separate search tree for
the null entries, and use fixed "allTheSame" insertion and search
rules when processing this tree, instead of calling the index opclass
methods.  This way the opclass methods do not need to worry about
dealing with nulls.

Catversion bump is for pg_am updates as well as the change in on-disk
format of SPGiST indexes; there are some tweaks in SPGiST WAL records
as well.

Heavily rewritten version of a patch by Oleg Bartunov and Teodor Sigaev.
(The original also stored nulls separately, but it reused GIN code to do
so; which required undesirable compromises in the on-disk format, and
would likely lead to bugs due to the GIN code being required to work in
two very different contexts.)
2012-03-11 16:29:59 -04:00
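
A sketch of the newly supported case using the built-in SP-GiST point opclass; table and column names are hypothetical, and whether the planner actually picks the index still depends on costs:

    CREATE TABLE places (loc point);
    CREATE INDEX places_loc_spgist ON places USING spgist (loc);

    -- With this commit the index can also satisfy IS NULL tests
    -- and whole-index scans.
    SELECT count(*) FROM places WHERE loc IS NULL;
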
Michael Meskes fc227a4e3b Removed redundant "the" from ecpg's docs.
Typo spotted by Erik Rijkers.
2012-03-11 12:25:52 +01:00
Tatsuo Ishii da9e73a137 Add description for --no-locale and --text-search-config. 2012-03-11 19:16:15 +09:00
Tom Lane 03e56f798e Restructure SPGiST opclass interface API to support whole-index scans.
The original API definition was incapable of supporting whole-index scans
because there was no way to invoke leaf-value reconstruction without
checking any qual conditions.  Also, it was inefficient for
multiple-qual-condition scans because value reconstruction got done over
again for each qual condition, and because other internal work in the
consistent functions likewise had to be done for each qual.  To fix these
issues, pass the whole scankey array to the opclass consistent functions,
instead of only letting them see one item at a time.  (Essentially, the
loop over scankey entries is now inside the consistent functions not
outside them.  This makes the consistent functions a bit more complicated,
but not unreasonably so.)

In itself this commit does nothing except save a few cycles in
multiple-qual-condition index scans, since we can't support whole-index
scans on SPGiST indexes until nulls are included in the index.  However,
I consider this a must-fix for 9.2 because once we release it will get
very much harder to change the opclass API definition.
2012-03-10 18:36:49 -05:00
Peter Eisentraut 39d74e346c Add support for renaming constraints
reviewed by Josh Berkus and Dimitri Fontaine
2012-03-10 20:19:13 +02:00
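
A sketch of the new table-constraint syntax; table and constraint names are hypothetical:

    ALTER TABLE orders
        RENAME CONSTRAINT orders_check TO orders_total_check;
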
Robert Haas e914a144d3 sepgsql DROP support.
KaiGai Kohei
2012-03-09 15:18:45 -05:00
Tom Lane b14953932d Revise FDW planning API, again.
Further reflection shows that a single callback isn't very workable if we
desire to let FDWs generate multiple Paths, because that forces the FDW to
do all work necessary to generate a valid Plan node for each Path.  Instead
split the former PlanForeignScan API into three steps: GetForeignRelSize,
GetForeignPaths, GetForeignPlan.  We had already bit the bullet of breaking
the 9.1 FDW API for 9.2, so this shouldn't cause very much additional pain,
and it's substantially more flexible for complex FDWs.

Add an fdw_private field to RelOptInfo so that the new functions can save
state there rather than possibly having to recalculate information two or
three times.

In addition, we'd not thought through what would be needed to allow an FDW
to set up subexpressions of its choice for runtime execution.  We could
treat ForeignScan.fdw_private as an executable expression but that seems
likely to break existing FDWs unnecessarily (in particular, it would
restrict the set of node types allowable in fdw_private to those supported
by expression_tree_walker).  Instead, invent a separate field fdw_exprs
which will receive the postprocessing appropriate for expression trees.
(One field is enough since it can be a list of expressions; also, we assume
the corresponding expression state tree(s) will be held within fdw_state,
so we don't need to add anything to ForeignScanState.)

Per review of Hanada Shigeru's pgsql_fdw patch.  We may need to tweak this
further as we continue to work on that patch, but to me it feels a lot
closer to being right now.
2012-03-09 12:49:25 -05:00