It'd be nice to be able to spell Jan Urbanski's name with the correct
accent marks, but we haven't yet found a way that works in everybody's
docs toolchain. This way definitely doesn't.
Create separate appendixes for contrib extensions and other server
plugins on the one hand, and utility programs on the other. Recast
the documentation of the latter as refentries, so that man pages are
generated.
This adds the variable COMP_KEYWORD_CASE, which controls in what case
keywords are completed. This is partially to let users configure the
change from commit 69f4f1c357, but it
also offers more behaviors than were available before.
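For example, to make tab completion always produce upper-case keywords, one
would set the variable in psql (a minimal illustration; "upper" is one of the
accepted values):

    \set COMP_KEYWORD_CASE upper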
Commit 62c7bd31c8 had assorted problems, most
visibly that it broke PREPARE TRANSACTION in the presence of session-level
advisory locks (which should be ignored by PREPARE), as per a recent
complaint from Stephen Rees. More abstractly, the patch made the
LockMethodData.transactional flag not merely useless but outright
dangerous, because in point of fact that flag no longer tells you anything
at all about whether a lock is held transactionally. This fix therefore
removes that flag altogether. We now rely entirely on the convention
already in use in lock.c that transactional lock holds must be owned by
some ResourceOwner, while session holds are never so owned. Setting the
locallock struct's owner link to NULL thus denotes a session hold, and
there is no redundant marker for that.
PREPARE TRANSACTION now works again when there are session-level advisory
locks, and it is also able to transfer transactional advisory locks to the
prepared transaction, but for implementation reasons it throws an error if
we hold both types of lock on a single lockable object. Perhaps it will be
worth improving that someday.
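For illustration, a sequence along these lines (lock keys chosen arbitrarily;
PREPARE TRANSACTION also requires max_prepared_transactions > 0) now prepares
cleanly instead of failing on the session-level hold:

    BEGIN;
    SELECT pg_advisory_lock(1);        -- session-level hold, ignored by PREPARE
    SELECT pg_advisory_xact_lock(2);   -- transactional hold, transferred to the prepared xact
    PREPARE TRANSACTION 'demo';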
Assorted other minor cleanup and documentation editing, as well.
Back-patch to 9.1, except that in the 9.1 branch I did not remove the
LockMethodData.transactional flag for fear of causing an ABI break for
any external code that might be examining those structs.
The default for the choice attribute of the <arg> element is "opt",
which would normally put the argument inside brackets. But the DSSSL
stylesheets contain a hack that treats <arg> directly inside <group>
specially, so that <group><arg>-x</arg><arg>-y</arg></group> comes out
as [ -x | -y ] rather than [ [-x] | [-y] ], which it would technically
be. But when building man pages, this doesn't work, and so the
command synopses on the man pages contain lots of extra brackets.
By putting choice="opt" or choice="plain" explicitly on every <arg>
and <group> element, we avoid any toolchain dependencies like that,
and it also makes it clearer in the source code what is meant.
In passing, make some small corrections in the documentation about
which arguments are really optional or not.
Remove the following ports:
- dgux
- nextstep
- sunos4
- svr4
- ultrix4
- univel
These are obsolete and not worth rescuing. In most cases, there is
circumstantial evidence that they wouldn't work anymore anyway.
Add more markup in particular so that the command options appear
consistently in monospace in the HTML output.
On the vacuumdb reference page, stop listing all the possible options in the
synopsis. They have become too numerous; the detailed options list covers
them now.
We had changed this from the default bold to monospace for all output
formats, but for man pages, this creates visual inconsistencies, so
revert to the default for man pages.
This patch adjusts the core statistics views to match the decision already
taken for pg_stat_statements, that values representing elapsed time should
be represented as float8 and measured in milliseconds. By using float8,
we are no longer tied to a specific maximum precision of timing data.
(Internally, it's still microseconds, but we could now change that without
needing changes at the SQL level.)
The columns affected are
pg_stat_bgwriter.checkpoint_write_time
pg_stat_bgwriter.checkpoint_sync_time
pg_stat_database.blk_read_time
pg_stat_database.blk_write_time
pg_stat_user_functions.total_time
pg_stat_user_functions.self_time
pg_stat_xact_user_functions.total_time
pg_stat_xact_user_functions.self_time
The first four of these are new in 9.2, so there is no compatibility issue
from changing them. The others require a release note comment that they
are now double precision (and can show a fractional part) rather than
bigint as before; also their underlying statistics functions now match
the column definitions, instead of returning bigint microseconds.
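For example, the function-timing columns can now be read directly as
fractional milliseconds (a sketch; track_functions must be enabled for these
views to be populated):

    SELECT funcname, calls, total_time, self_time
      FROM pg_stat_user_functions
     ORDER BY total_time DESC;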
Get rid of the per-column documentation of underlying functions, which did
far more to clutter the view descriptions than to help, and
was rather incomplete and typo-ridden anyway. Instead suggest that people
consult the definitions of the standard views to see the underlying
functions.
The older functions for obtaining individual facts about backends are now
somewhat obsoleted by pg_stat_get_activity, which means that they are not
documented by any standard view. So I put that information into a separate
table. (Maybe we should just deprecate them instead?)
In passing, fix a couple more documentation errors.
Display total time and I/O timings in milliseconds, for consistency with
the units used for timings in the core statistics views. The columns
remain of float8 type, so that sub-msec precision is available. (At some
point we will probably want to convert the core views to use float8 type
for the same reason, but this patch does not touch that issue.)
This is a release-note-requiring change in the meaning of the total_time
column. The I/O timing columns are new as of 9.2, so there is no
compatibility impact from redefining them.
Do some minor copy-editing in the documentation, too.
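For example (a sketch; the blk_* columns are only populated when buffer I/O
timing collection is enabled):

    SELECT query, calls, total_time, blk_read_time, blk_write_time
      FROM pg_stat_statements
     ORDER BY total_time DESC
     LIMIT 5;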
Get rid of section 8.5.6 (Date/Time Internals), which appears to confuse
people more than it helps, and anyway discussion of Postgres' internal
datetime calculation methods seems pretty out of place here. Instead,
make datatype.sgml just say that we follow the Gregorian calendar (a bit
of specification that, sadly, was not previously present anywhere in that chapter)
and link to the History of Units appendix for more info. Do some mild
editorialization on that appendix, too, to make it clearer that we are
following proleptic Gregorian calendar rules rather than anything more
historically accurate.
Per a question from Florence Cousin and subsequent discussion in
pgsql-docs.
Prohibiting this outright would break dumps taken from older versions
that contain such casts, which would create far more pain than is
justified here.
Per report by Jaime Casanova and subsequent discussion.
The original syntax wasn't universally loved, and it could be used only in
ALTER TABLE, not CREATE TABLE. The new syntax works in both, and
it also allows using ALTER TABLE ONLY to add an uninherited CHECK
constraint, per discussion.
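A minimal sketch of the resulting syntax (table and constraint names are made
up):

    CREATE TABLE parent (
        a int,
        CONSTRAINT a_positive CHECK (a > 0) NO INHERIT
    );
    -- an uninherited constraint can also be added after the fact:
    ALTER TABLE ONLY parent ADD CONSTRAINT a_small CHECK (a < 100);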
The corresponding pg_constraint column has accordingly been renamed to
connoinherit.
This commit partly reverts some of the changes in
61d81bd28d, particularly some pg_dump and
psql bits, because now pg_get_constraintdef includes the necessary NO
INHERIT within the constraint definition.
Author: Nikhil Sontakke
Some tweaks by me
The result object methods colnames() etc. would crash when called
after a command that did not produce a result set. Now they throw an
exception.
discovery and initial patch by Jean-Baptiste Quenot
The output of the new pg_xlog_location_diff function is of type numeric,
since it could theoretically overflow an int8 due to signedness; this
provides a convenient way to format such values.
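For instance (assuming the numeric-accepting pg_size_pretty variant is the
formatting aid in question; the starting location is arbitrary):

    SELECT pg_size_pretty(
             pg_xlog_location_diff(pg_current_xlog_location(), '0/0'));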
Fujii Masao, with some beautification by me.
Per mailing list discussion, we would like to keep the bytea functions
parallel to the text functions, so rename bytea_agg to string_agg,
which already exists for text.
Also, to satisfy the rule that we don't want aggregate functions of
the same name with a different number of arguments, add a delimiter
argument, just like string_agg for text already has.
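A minimal illustration of the renamed aggregate on bytea input:

    SELECT string_agg(b, ''::bytea)
      FROM (VALUES ('\x01'::bytea), ('\x02'::bytea)) AS v(b);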
Previously we attempted to throw an error or at least warning for missing
schemas, but this was done inconsistently because of implementation
restrictions (in many cases, GUC settings are applied outside transactions
so that we can't do system catalog lookups). Furthermore, there were
exceptions to the rule even in the beginning, and we'd been poking more
and more holes in it as time went on, because it turns out that there are
lots of use-cases for having some irrelevant items in a common search_path
value. It seems better to just adopt a philosophy similar to what's always
been done with Unix PATH settings, wherein nonexistent or unreadable
directories are silently ignored.
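For example, this now succeeds silently even though the first schema does not
exist:

    SET search_path = no_such_schema, public;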
This commit also fixes the documentation to point out that schemas for
which the user lacks USAGE privilege are silently ignored. That's always
been true but was previously not documented.
This is mostly in response to Robert Haas' complaint that 9.1 started to
throw errors or warnings for missing schemas in cases where prior releases
had not. We won't adopt such a significant behavioral change in a back
branch, so something different will be needed in 9.1.
postgres:// URIs are an attempt to "stop the bleeding" that has been said to
be occurring in this area as external projects adopt their own connection
syntaxes. The syntaxes supported by this patch:
postgres://[user[:pwd]@][unix-socket][:port[/dbname]][?param1=value1&...]
postgres://[user[:pwd]@][net-location][:port][/dbname][?param1=value1&...]
should be enough to cover most interesting cases without having to
resort to "param=value" pairs, but those are provided for the cases that
need them regardless.
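For instance, a URI of the second (host-based) form might look like this
(names and parameters made up):

    postgres://alice:secret@db.example.com:5432/mydb?sslmode=require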
libpq documentation has been shuffled around a bit, to avoid stuffing
all the format details into the PQconnectdbParams description, which was
already a bit overwhelming. The list of keywords has moved to its own
subsection, and the details on the URI format live in another subsection.
This includes a simple test program, as requested in discussion, to
ensure that interesting corner cases continue to work appropriately in
the future.
Author: Alexander Shulgin
Some tweaking by Álvaro Herrera, Greg Smith, Daniel Farina, Peter Eisentraut
Reviewed by Robert Haas, Alexey Klyukin (offlist), Heikki Linnakangas,
Marko Kreen, and others
Oh, it also supports postgresql:// but that's probably just an accident.
This patch reverts commit 191ef2b407
and thereby restores the pre-7.3 behavior of EXTRACT(EPOCH FROM
timestamp-without-tz). Per discussion, the more recent behavior was
misguided on a couple of grounds: it makes it hard to get a
non-timezone-aware epoch value for a timestamp, and it makes this one
case dependent on the value of the timezone GUC, which is incompatible
with having timestamp_part() labeled as immutable.
The other behavior is still available (in all releases) by explicitly
casting the timestamp to timestamp with time zone before applying EXTRACT.
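For example (sketch):

    -- no longer depends on the TimeZone setting
    SELECT extract(epoch FROM timestamp '2012-04-01 12:00:00');
    -- previous (7.3 through 9.1) behavior: interpret the value in the
    -- session time zone first
    SELECT extract(epoch FROM timestamp '2012-04-01 12:00:00'::timestamptz);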
This will need to be called out as an incompatible change in the 9.2
release notes. Although having mutable behavior in a function marked
immutable is clearly a bug, we're not going to back-patch such a change.
It's still non-deterministic in some sense ... but given fixed settings
and identical planning problems, it will now always choose the same plan,
so we probably shouldn't tar it with that brush. Per bug #6565 from
Guillaume Cottenceau. Back-patch to 9.0 where the behavior was fixed.
If we make the initially-called function return the table physical-size
estimate, acquire_inherited_sample_rows will be able to use that to
allocate numbers of samples among child tables, when the day comes that
we want to support foreign tables in inheritance trees.
ANALYZE now accepts foreign tables and allows the table's FDW to control
how the sample rows are collected. (But only manual ANALYZEs will touch
foreign tables, for the moment, since among other things it's not very
clear how to handle remote permissions checks in an auto-analyze.)
contrib/file_fdw is extended to support this.
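A sketch using file_fdw (server name and file path are made up):

    CREATE EXTENSION file_fdw;
    CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;
    CREATE FOREIGN TABLE words (w text)
        SERVER files OPTIONS (filename '/tmp/words.csv', format 'csv');
    ANALYZE words;   -- the FDW now supplies the sample rows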
Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
Ants Aasma's original patch to add timing information for buffer I/O
requests exposed this data at the relation level, which was judged too
costly. I've here exposed it at the database level instead.
This patch provides a test case for libpq's row processor API.
contrib/dblink can deal with very large result sets by dumping them into
a tuplestore (which can spill to disk) --- but until now, the intermediate
storage of the query result in a PGresult meant memory bloat for any large
result. Now we use a row processor to convert the data to tuple form and
dump it directly into the tuplestore.
A limitation is that this only works for plain dblink() queries, not
dblink_send_query() followed by dblink_get_result(). In the latter
case we don't know the desired tuple rowtype soon enough. While hack
solutions to that are possible, a different user-level API would
probably be a better answer.
Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
Traditionally libpq has collected an entire query result before passing
it back to the application. That provides a simple and transactional API,
but it's pretty inefficient for large result sets. This patch allows the
application to process each row on-the-fly instead of accumulating the
rows into the PGresult. Error recovery becomes a bit more complex, but
often that tradeoff is well worth making.
Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
pg_stat_statements now hashes selected fields of the analyzed parse tree
to assign a "fingerprint" to each query, and groups all queries with the
same fingerprint into a single entry in the pg_stat_statements view.
In practice it is expected that queries with the same fingerprint will be
equivalent except for values of literal constants. To make the display
more useful, such constants are replaced by "?" in the displayed query
strings.
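For instance, after this change these two statements accumulate into a single
entry, displayed with the constant replaced by "?":

    SELECT * FROM pg_class WHERE relname = 'pg_type';
    SELECT * FROM pg_class WHERE relname = 'pg_attribute';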
This mechanism currently supports only optimizable queries (SELECT,
INSERT, UPDATE, DELETE). Utility commands are still matched on the
basis of their literal query strings.
There remain some open questions about how to deal with utility statements
that contain optimizable queries (such as EXPLAIN and SELECT INTO) and how
to deal with expiring speculative hashtable entries that are made to save
the normalized form of a query string. However, fixing these issues should
require only localized changes, and since there are other open patches
involving contrib/pg_stat_statements, it seems best to go ahead and commit
what we've got.
Peter Geoghegan, reviewed by Daniel Farina
Currently, the only way to see the numbers this gathers is via
EXPLAIN (ANALYZE, BUFFERS), but the plan is to add visibility through
the stats collector and pg_stat_statements in subsequent patches.
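For example (assuming the collection is controlled by the new track_io_timing
parameter):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pg_class;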
Ants Aasma, reviewed by Greg Smith, with some further changes by me.
Fix loss of previous expression-simplification work when a transform
function fires: we must not simply revert to untransformed input tree.
Instead build a dummy FuncExpr node to pass to the transform function.
This has the additional advantage of providing a simpler, more uniform
API for transform functions.
Move documentation to a somewhat less buried spot, relocate some
poorly-placed code, be more wary of null constants and invalid typmod
values, add an opr_sanity check on protransform function signatures,
and some other minor cosmetic adjustments.
Note: although this patch touches pg_proc.h, no need for catversion
bump, because the changes are cosmetic and don't actually change the
intended catalog contents.
Per a suggestion from Euler Taveira, it seems like a good idea to include
this information in \du (and \dg) output. This costs nothing for people
who are not using the VALID UNTIL feature, while for those who are, it's
rather critical information.
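For instance (role name made up):

    CREATE ROLE alice LOGIN PASSWORD 'secret' VALID UNTIL '2013-01-01';
    \du alice   -- the expiration date is now listed among the role's attributes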
Fabrízio de Royes Mello
PGAC_PATH_COLLATEINDEX supposed that it could use AC_PATH_PROGS to search
for collateindex.pl, but that macro will only accept files that are marked
executable, and at least some DocBook installations don't mark the script
executable (a case the docs Makefile was already prepared for). Accept the
script if it's present and readable in $DOCBOOKSTYLE/bin, and otherwise
search the PATH as before.
Having fixed that up, we don't need the fallback case that was in the docs
Makefile, and instead can throw an understandable error if configure didn't
find the script. Per recent trouble report from John Lumby.
Document that routine vacuuming is now also important for the purpose
of index-only scans; and mention in the section that describes the
visibility map that it is used to implement index-only scans.
Marti Raudsepp, with some changes by me.
Instead of just stopping after removing an arbitrary subset of orphaned
large objects, commit and start a new transaction after each -l objects.
This is just as effective as the original patch at limiting the number of
locks used, and it doesn't require doing the OID collection process
repeatedly to get everything. Since the option no longer changes the
fundamental behavior of vacuumlo, and it avoids a known server-side
limitation, enable it by default (with a default limit of 1000 LOs per
transaction).
In passing, be more careful about properly quoting the names of tables
and fields, and do some other cosmetic cleanup.
This is intended as infrastructure to allow sepgsql to cooperate with
connection pooling software, by allowing the effective security label
to be set for each new connection.
KaiGai Kohei, reviewed by Yeb Havinga.
add ability to control permissions of created files
have psql echo its queries for easier debugging
output four separate log files, and delete them on success
add -r/--retain option to keep log files after success
make log files append-only
remove -g/-G/-l logging options
suggest tailing the appropriate log file on failure
enhance -v/--verbose behavior